2014-05-28

Can I have a password for my hotel room's shower?

A conversation you never get at a hotel when you check in:
"how many people will be having showers?"
"oh, three of us"
"OK, here are three vouchers for hot water. Keep them handy as you'll need to retype them at random points in the day"
"thank you. Is the login screen in a random EU language and in a font that looks really tiny when I try to enter it, with a random set of characters that are near impossible to type reliably on an on-screen keyboard especially as the UI immediately converts them to * symbols out of a misguided fear that someone will be looking over my shoulder trying to steal some shower-time?"
"Why, yes -how very perceptive of you. Oh, one more thing -hot water quotas"
"hot water quotas?"
"yes, every voucher is good for 100 Litres of water/day. If you go over that rate then you will be billed at 20 c/Litre."
"That's a lot!"
"Yes, we recommend you only have quick showers. But don't worry, the flow rate of the shower is very low on this hot water scheme, so you can still have three minutes worth of showering without having to worry"
"'this' hot water scheme?"
"yes -you can buy a premium-hot-water-upgrade that not only gives you 500L/day, it doubles the flow rate of the shower.
"oh, I think I will just go to the cafe round the corner -they have free hot water without any need for a login"
"if that is what you want. Is there anything else?"
"Yes, where is my room?"
"It's on the 17 floor -the stairs are over there. With your luggage you could get everything up in two goes -it will only take about fifteen minutes"
"17 floors! Fifteen Minutes! Don't you have a lift?"
"Ah -do you mean our premium automated floor-transport service?  Why yes, we do have one. It won't even add much to your bill. Would you like to buy a login? First -how many people will plan on using the lift every day -and how many times?

2014-05-14

LG: this is not your television -certainly not your kids'

If there's one difference between the current "Internet of Things" concept and its predecessors -ubiquitous computing, JINI, Cooltown, etc- it is that it is not just about devices with internet connectivity: there's a presumption that those devices are generating data for remote systems, and making use of that processed data.

This can be a great thing. Devices with integrated awareness of the global aggregate datasets have great potential to benefit their owners, myself included. And I'm confident that when that data starts to be collected, it'll be in Hadoop clusters, running code I've helped author.

But we need to start thinking now about how to deliver an Internet of Things-that-benefit-the-owner. If the connectivity and data analysis is designed to benefit someone else, then it's gone from a utility to a threat.

It is critical that we make sure that the emergence of the "Internet of Things" does not become perceived as a threat by those of us who own those things. If it does, the vision and opportunity will not be realised. Which is why I'm starting to worry about my television -to the extent that not only am I not applying a new system update which includes a critical "terms and conditions update", I'm thinking of composing a letter to the UK Information Commissioner's Office on the topic.

This is a 16-month-old telly, one I first reviewed in 2013, when I implied its "smart" features were like AOL's, or those 1998-era home PCs pre-cluttered with junk you didn't want.

Later that year, it turned out that LG Smart TVs were discreetly uploading terrestrial TV watching data, along with USB filenames, and that its privacy policy considered such viewing data anonymous.

LG got some bad press there, which they are reacting to with that new upgrade -disabling smart TV features until you agree to its new policy. Apply the system upgrade and iPlayer gets disabled until you consent to the new policy -one that pretty much enumerates any possible way to extract information about the user short of videoing everything you do.

Here are some points in the new and improved privacy policy which grab my attention.

Accept it or the device doesn't work: "in order to gain access to the full range of Smart TV services, you must agree to our Privacy Policy, which facilitates a greater exchange of information between your LG Smart TV and our systems." That's for access to BBC iPlayer and Netflix -third-party services.

Viewing information includes data from HDMI devices: Viewing Information may include the name of the channel or program watched, requests to view content, the terms you use to search for content, details of actions taken while viewing (e.g., play, stop, pause, etc.), the duration that content was watched, input method (RF, Component, HDMI) and search queries.

You opt out of all EU data protection: "By agreeing to this Privacy Policy you expressly consent to us and our business associates and suppliers processing your data in any jurisdiction"

The section I really want to call out is the paragraph on "Protecting the Privacy of Children":

Protecting the privacy of children is important to us. For that reason, none of our Smart TV services are directed at anyone under 13 and they are not structured specifically to attract anyone under 13. We also do not knowingly collect or maintain personal information from users who are under 13. Should we learn or be notified that we have collected information from users under the age of 13, we will promptly delete such personal information.


This is something that nobody could say with a straight face. The privacy policy states that it has the right to monitor all terrestrial TV viewing -including CBBC, CBeebies and other kids channels- and push that upstream. The "viewing information" also includes information on external inputs, so perhaps even playing content from a games console is monitored.


Actual use case


This is clearly an illegal use of a television: two children are trying to play a game on it. Which -for better or worse- children do. And yet it is now something LG are pretending their smart TV "is not for".

Someone should tell the marketing department that children can't use Smart TVs, as their UK site says otherwise, with the phrase "LG Smart TV's Game World provides family entertainment." Maybe they mean "families where all the kids are over 13". Except that the same page also says, "Enjoy hours of free 3D content including documentaries, sports, kids and music concerts and rent the latest 3D Disney movies exclusively with LG Smart TV." We have actually operated a say-no-to-Disney policy for 12 years, but I believe that they do target children under the age of thirteen.

Any assertion that the TV's advanced features aren't there for use by children aged twelve and under is bogus -and the site and marketing show this. The fact that providers like Netflix and iPlayer have kids' content shows they are targeting children. If LG didn't want kids to use that content, they'd have approached the organisations and said "leave the kids' content out on our machines".

So what now? 

1. I'm not applying the update, so haven't accepted the T&C changes. I wonder what's going to happen there? Is everything suddenly going to stop working in a big server-side switch, or will I just be assumed to have accepted -and my data collected as if I had agreed?

2. I'm debating contacting LG to say "a twelve year old uses our television, please stop collecting data on it". This would really put them on the spot to see how they react.

3. I'm not happy about the data-out-of-EU policy. I know web sites -remote servers- have such policies. But can consumer goods, bought at a shop down the road, have a set of T&Cs that say that, for them to work, all EU data protection laws have to be discarded?

What happens on these smart TVs is important -it's an example of how a traditional offline consumer device is being wired up to the rest of the net -and we need to define now what everyone's expectations should be. We consumers should expect to be the owners of the machines -and in control of the data. Vendors of the devices have opportunities to make great uses of the data -but they have to do it in a way that brings tangible benefit. Better advertising placement on a TV you've bought isn't such a benefit -at least to me. And if it is -why isn't it on the product web pages?

LG are being leading edge here -but right now they are almost becoming a "what not to do" story. And with every update to their firmware, things only get worse.

2014-05-04

Fundamental Flaws in Android's "contacts-includes-search" feature

One under-reported aspect of the Android 4.4 release is that it adds -by default, apparently- nearby-places searches inside your contacts.
"why not mix searches with your contacts"

That is, whereas before you had a list of people you wanted to phone, which you could search and then call, now it can also include "nearby places that include your query".

A key point is being missed here. When I start typing the name of someone in the contacts list, I am not trying to "query" something, just do a lookup of people in a small table. And if there isn't a match there, it is not that I want the search broadened -it is that I have mistyped someone's name and would like some fuzzy matching.

What do I get, then, if I mistype "andes" instead of the surname of a friend, "anders"?
because random locations are exactly what you want

I don't get a "showing results for anders" dialog the way you get if you misspell something in google search.

No, I get a hostel and a set of apartments in Santiago, Chile, and a mountain sports shop in New Hampshire.

None of these are nearby by any definition of "nearby" except that used by astronomers, where "inner solar system" is considered close.

Maybe, just maybe, if the "search nearby places" feature did actually search nearby then it could be useful -though as there are so many other search bars on an Android phone (top of the front page, in Maps, in Chrome), I have never stared at the phone thinking "I wonder if there is a way to look up things on the web from here?", so I seriously doubt that.

But there is no way that I can conceivably consider returning two hotels in South America and a US shop to be a "nearby" search -it is of no use whatsoever.

I find it hard to conceive what developer team managed to come up with a search threshold where the cutoff for "not nearby" appears to be somewhere outside the orbit of the moon. No doubt there's some bug where the fact that a Java byte is signed means that setting one to 180 degrees results in the number being chopped down, but really, couldn't the QA team write a test that entered a search term not in the contacts -and fail the test if the top three results included one or more entries on a different continent, or even hemisphere?
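
For what it's worth, the arithmetic behind that guess is real, even if the cause is pure speculation on my part:

```java
public class LongitudeOverflow {
  public static void main(String[] args) {
    int longitudeDegrees = 180;
    // a Java byte is signed, range -128..127, so 180 doesn't fit
    byte stored = (byte) longitudeDegrees;
    // prints -76: suddenly "nearby" has wrapped round to another hemisphere
    System.out.println(stored);
  }
}
```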

2014-01-15

Greylisting - like blacklisting only more forgiving

How not to fix a car



Reading the paper The φ Accrual Failure Detector has made me realise something that I should have recognised before: blacklisting errant nodes is too harsh -we should be assigning a score to them and then ordering them based on perceived reliability, rather than using a simple reliable/unreliable flag.

In particular: the smaller the cluster, the more you have to make do with unreliable nodes. It doesn't matter if your car is unreliable, if it is all you have. You will use it, even if it means you end up trying to tape up an exhaust in a car park in Snowdonia, holding the part in place with a lead acid battery mistakenly placed on its side.

Similarly, on a 3-node cluster, if you want three region servers on different nodes, you have to accept that they all get in, even if sometimes unreliable.

This changes how you view cluster failures. We should track the total failures over time, and some weighted moving average of recent failures -the latter giving us a score of unreliability, and hence a reliability score of 1-unreliability, assuming I can normalise unreliability to a floating-point value in the range 0-1.
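
Something like this, as a minimal sketch -the smoothing factor, the starting score and the class name are all mine, not anything shipping:

```java
/**
 * Per-node unreliability as an exponentially weighted moving average of
 * failure events, normalised to the range 0-1. Reliability is 1 - unreliability.
 */
public class NodeReliability {
  private static final double ALPHA = 0.5;  // weight of the most recent outcome
  private double unreliability = 0.25;      // new nodes start as "unknown", not "known good"
  private long totalFailures = 0;

  /** Record the outcome of the latest attempt to run something on this node. */
  public synchronized void record(boolean failed) {
    if (failed) {
      totalFailures++;
    }
    unreliability = ALPHA * (failed ? 1.0 : 0.0) + (1 - ALPHA) * unreliability;
  }

  public synchronized double reliability() {
    return 1.0 - unreliability;
  }

  public synchronized long getTotalFailures() {
    return totalFailures;
  }
}
```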

When specifically requesting nodes, we only ask for those with a recent reliability over a threshold; when we get them back we first sort by reliability and try to allocate all role instances to the most reliable nodes (sometimes YARN gives you more allocations than you asked for). We may still end up with some allocations on nodes below the reliability threshold.
That threshold will depend on cluster size -we need to tune that based on the cluster size provided by the RM (issue: does it return current cluster size or maximum cluster size).
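The selection step itself is simple -here's a sketch in plain model code, with no YARN API involved and illustrative names only:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Pick candidate hosts whose reliability score is at or above the threshold,
 * sorted most-reliable first, ready to go into a placement request.
 */
public class NodeSelector {

  /** @param reliabilityByHost hostname to reliability score in the range 0-1 */
  public static List<String> candidates(Map<String, Double> reliabilityByHost,
                                        double threshold) {
    return reliabilityByHost.entrySet().stream()
        .filter(e -> e.getValue() >= threshold)
        .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
  }
}
```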

What to do with allocations on nodes below the reliability threshold?
Options:
  1. discard them, ask for a new instance immediately: high risk of receiving the old one again
  2. discard them, wait, then ask for a new instance: lower risk.
  3. ask for a new instance before discarding the old one, releasing it at the soonest of (the new allocation coming in, some time period after making the request) -sketched below. This probably has the lowest risk, precisely because if there is capacity in the cluster we won't get that old container back -we'll get a new one on an arbitrary node. If there isn't capacity, then when we release the container some time period after making the request, we get it back again. That delayed release is critical to ensuring we get something back if there is no space.
What to do if we get the same host back again? Maybe just take what we are given, especially in case #3, where we know that the container was released after a timeout. It'll be below the reliability threshold, but let's see what happens -it may just be that it now works (some other service blocking a port has finished, etc.). And if not, it gets marked as more unreliable.
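
Here is what that deferred-release dance from option 3 might look like -the Allocator interface is just a placeholder for whatever the AM really uses to talk to the resource manager, not a YARN API:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Option 3: request a replacement first, then release the unwanted container
 * at the soonest of "replacement arrived" or "timeout expired", so that on a
 * full cluster the old container can be handed straight back to us.
 */
public class DeferredRelease {

  public interface Allocator {
    void requestNewContainer();
    void releaseContainer(String containerId);
  }

  private final ScheduledExecutorService timer =
      Executors.newSingleThreadScheduledExecutor();
  private final AtomicBoolean released = new AtomicBoolean(false);

  public void replace(Allocator allocator, String unwantedId, long timeoutMillis) {
    allocator.requestNewContainer();
    // fallback: release after the timeout even if no replacement has arrived
    timer.schedule(() -> releaseOnce(allocator, unwantedId),
        timeoutMillis, TimeUnit.MILLISECONDS);
  }

  /** Call when the replacement allocation arrives before the timeout. */
  public void onReplacementAllocated(Allocator allocator, String unwantedId) {
    releaseOnce(allocator, unwantedId);
  }

  private void releaseOnce(Allocator allocator, String unwantedId) {
    if (released.compareAndSet(false, true)) {
      allocator.releaseContainer(unwantedId);
    }
  }
}
```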

If we do start off giving all nodes a reliability of under 100%, then we can even distinguish "unknown" from "known good" and "known unreliable". This gives applications a power they don't have today -a way to not trust the as-yet-unknown parts of a cluster.

If using this for HDD monitoring, I'd certainly want to consider brand new disks as less than 100% reliable at first, and try to avoid storing data on more than one drive below a specific reliability threshold, though that just makes block placement even more complex.


I like this design -I just need the relevant equations.

2014-01-06

Hoya as an architecture for YARN apps

Sepr@Bearpit

Someone -and I won't name them- commented on my proposal for a Hadoop Summit EU talk, Secrets of YARN development: "I am reading YARN source code for the last few days now and am curious to get your thoughts on this topic - as I think HOYA is a bad example (sorry!) and even the DistributedShell is not making any sense."

My response: I don't believe that DShell is a good reference architecture for a YARN app. It sticks all the logic for the AM into the service class itself, doesn't do much on failures, and avoids the whole topic of RPC and security. It introduces the concepts, but if you start with it and evolve it, you end up with a messy codebase that is hard to test -and you are left delving into the MR code to work out how to deal with YARN RM security tokens, RPC service setup, and other details that you'd need to know in production.

Whereas Hoya
  • Embraces the service model as the glue to building a more complex application. Shows my SmartFrog experience in building workflows and apps from service aggregation.
  • Completely splits the model of the YARN app from the YARN-integration layer, producing a model-controller design in which the model can be tested independently of YARN itself (see the sketch below).
  • Provides a mock YARN runtime to test some aspects of the system --failures, placement history, best-effort placement-history reload after unplanned AM failures --and lays the way for simulating that the model can handle 1000+ clusters.
  • Contains a test suite that even kills HBase masters and Region Servers to verify that the system recovers.
  • Implements the secure RPC stuff that Dshell doesn't and which isn't documented anywhere that I could find.
  • Bundles itself up into a tarball with a launcher script -it does not rely on Hadoop or YARN being installed on the client machine.
So yes, I do think Hoya is a good example.
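
To illustrate that model/controller split in the crudest possible terms -hypothetical interfaces and names, not Hoya's actual classes- the model owns all the state and placement decisions and never imports a YARN class, so a mock runtime can drive it in tests; only the thin controller layer turns its decisions into real resource requests:

```java
import java.util.List;

/**
 * The pure model side: tracks what the application wants and what it has.
 * No YARN types appear here, which is what makes it unit-testable.
 */
public interface AppStateModel {

  /** Callback from the controller: a container of this role failed on this host. */
  void onContainerFailed(String role, String hostname);

  /** @return the outstanding asks -one entry per container still wanted. */
  List<ContainerAsk> reviewOutstandingRequests();
}

/** A plain value object: what to ask for and where we'd prefer to place it. */
final class ContainerAsk {
  final String role;
  final String preferredHost;   // null means "anywhere"

  ContainerAsk(String role, String preferredHost) {
    this.role = role;
    this.preferredHost = preferredHost;
  }
}
```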

Where it is weak is
  1. It's now got too sophisticated for an intro to YARN.
  2. I made the mistake of using protobuf for RPC, which is needless complexity and pain. Unless you really, really want interop and are willing to waste a couple of days implementing marshalling code, I'd stick to the classic Hadoop RPC. Or look at Thrift.
  3. I need to revisit and clean up bits of the client-side provider/template setup logic.
  4. We need to implement anti-affinity by rejecting multiple assignments to the same host for non-affine roles.
  5. It's pure AM-side, starting HBase or Accumulo on the remote containers, but doesn't try hooking the containers up to the AM for any kind of IPC.
  6. We need to improve its failure handling with more exponential backoff, moving-average blacklisting and some other details. This is really fascinating, and as Andrew Purtell pointed me at phi-accrual failure detection, is clearly an opportunity for some interesting work.
I'd actually like to pull the mock YARN stuff out for re-use --same for any blacklisting code written for long-lived apps.

I also filed a JIRA "rework DShell to be a good reference design", which means implement the MVC split and add a secure RPC service API to cover that topic.

Otherwise: have a look at the twill project in incubation. If someone is going to start writing a YARN app, I'd say: start there. 

2014-01-01

My policy on open source surveys: ask the infrastructure, not the people

An email trickling into my inbox reminds me to repeat my existing stance on requests to complete surveys about open source software development: I don't do them.

chairlift

The availability of the email addresses of developers in OSS projects may make people think that they could gain some insight by asking those developers questions as part of some research project, but consider this:
  1. You won't be the first person to have thought of this -and tried to conduct a survey.
  2. The only people answering your survey will be people who either enjoy filling in surveys, or who haven't been approached repeatedly before.
  3. Therefore your sample set will be utterly unrealistic, consisting of people new to open source (and not yet bored of completing surveys), or who like filling in surveys.
  4. Accordingly any conclusions you come to could be discounted based on the unrepresentative, self-selecting sample set.
The way to innovate in understanding open source projects -and so to generate defensible results-  is to ask the infrastructure: the SCM tools, the mailing list logs, the JIRA/bugzilla issue trackers. There are APIs for all of this.
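
To give a flavour of what that looks like: counting which files change most often in a project is a few lines of code against a local clone. A throwaway sketch, assuming the git CLI is on the path (the class name and the one-year window are my choices):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;

/** Count commits-per-file over the last year and print the 20 hottest files. */
public class ChurnCounter {
  public static void main(String[] args) throws Exception {
    Process git = new ProcessBuilder(
        "git", "log", "--since=1.year", "--name-only", "--pretty=format:")
        .directory(new File(args[0]))     // path to the local checkout
        .start();
    Map<String, Integer> commitsPerFile = new HashMap<>();
    try (BufferedReader reader =
             new BufferedReader(new InputStreamReader(git.getInputStream()))) {
      String line;
      while ((line = reader.readLine()) != null) {
        if (!line.isEmpty()) {
          commitsPerFile.merge(line, 1, Integer::sum);
        }
      }
    }
    commitsPerFile.entrySet().stream()
        .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
        .limit(20)
        .forEach(e -> System.out.println(e.getValue() + "\t" + e.getKey()));
  }
}
```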

Here, then, are some better ideas than yet another SurveyMonkey email whose answers' significance can be disputed:
  1. Look at the patch history for a project and identify the bodies of code with the highest rate of change -and the lowest. Why the differences? Is the code with the highest velocity the most unreliable, or merely the most important?
  2. Look at the stack traces in the bug reports. Do they correlate with the modules in (1)?
  3. Does the frequency of stack traces against a source module increase after the patch to that area ships? Or does it decrease? That is, do patches actually reduce the number of defects, or, as Brooks said in The Mythical Man-Month, simply move them around?
  4. Perform automated complexity analysis  on source. Are the most complex bits the least reliable? What is their code velocity?
  5. Is the amount of a discussion on a patch related to the complexity of the destination or the code in the patch?
  6. Does the complexity of a project increase or decrease over time?
  7. Does the code coverage of a project increase or decrease over time?
See? Lots of things you could do -by asking the machines. This is the data-science way, not asking surveys against a partially-self-selecting set of subjects and hoping that it is in some way representative of the majority of open source software projects and developers.

[photo: ski lifts in the cloud, Austria, december 2013]

2013-11-20

Television Viewing & the Deanonymization of Large Sparse Datasets.


[preamble: this is not me writing against collecting data and analysing user behaviour, including TV viewing actions. I cherish the fact that Netflix recommends different things to different family members, and I'm happy for the iPlayer team to get some generic usage data and recognise that nobody actually wants to watch Graham Norton purely from the way that all viewers stop watching before the introductory credits are over. What is important here is that I get things in exchange: suggestions, content. What appears to be going on here is that a device I bought is sending details of TV watching activity so as to better place adverts on a bit of the screen I paid for, possibly in future even interstitially during the startup of a service like Netflix or iPlayer. I don't appear to have got anything in exchange, and nobody asked me if I wanted the adverts, let alone the collection of details about myself and my family, including an 11-year-old child.]

Graham Norton on iPlayer


Just after Christmas I wandered down to Richer Sounds and bought a new TV, the first one in a decade, and probably the second TV we've owned since the late 1980s. My goal was a large monitor with support for free-to-air DTV and HD DTV, along with the HDMI and RGB ports to plug in useful things, including a (new) PS3 which would run iPlayer and Netflix. I ended up getting a deeply discounted LG Smart TV, as the "smart" bits came with the monitor that I wanted.

I covered the experience back in March, where I stated that I felt the smart bit was AOL-like in its collection of icons for things I didn't want and couldn't delete, its dumbed-down versions of Netflix and iPlayer, and its unwanted adverts in the corner. But that was it; the Netflix tablet/TV integration compensates for the weak TV interface, and avoids the problem of PS3 access time limits on school nights, as the PS3 can stay hidden until weekends.

Untitled

Last week I finally acceded to the TV's "new update available" popups, after which came the "reboot your TV" message. Which I did, only to be told that I had to accept an updated privacy policy. I started to look at this, but gave up after screen 4 of 20+, mentioning it briefly on that social networking stuff (which gives me things like Elephant-Bird in exchange for logging my volunteered access -access where I turn off location notification on all devices).

I did later regret not capturing that entire privacy policy by camera, and tried to see if I could find it online, but at the time the search term "LG SmartTV privacy policy" returned next to nothing apart from a really good policy for the LG UK web site, which even goes into the detail of identifying each cookie and its role. I couldn't see the policy after a quick perusal of the TV menus, so that was it.

Only a few days later, Libby Miller pointed me at an article by DoctorBeet, who'd spun Wireshark up to listen to what the TV was saying, showing how his LG TV was doing an HTTP form POST to a remote site on every channel change, as well as sending details of filenames on USB sticks.

This is a pretty serious change from what a normal television does. DoctorBeet went further and looked at why. Primarily it appears to be for advert placement, either in that corner of the "smart" portal, or at start time after you select "premium" content like iPlayer or Netflix. I haven't seen the latter, which is good -an extra 1.5MB download for an advert I'd have to stare through is not something I'd have been happy with.

Anyway, go look at his article, or even a captured request.

I'm thinking of setting up Wireshark to do the same for an evening. I made an attempt yesterday, but as the TV is CAT-5 to a 1Gb/s hub, then an Ethernet-over-power bridge to get to the base station, it's harder than I'd thought. My entire wired network is on switched ports so I can't packet sniff, and the 100Mb/s hub I dredged up from the loft turned out to be switched too. That means I'd have to do something innovative, like use the WEP-only 802.11b Ethernet-to-wifi bridge I also found in that box, hooked up to an open wifi base station plugged into the real router. Maybe at the weekend. A couple of days' logs would actually be an interesting dataset, even if it just logs PS3 activity hours as time-on-HDMI-port-1.

What I did do is go to the "opt out of adverts" settings page DoctorBeet had found, scrolled down and eventually followed some legal info link to get back to the privacy policy. Which I did photograph this time; the photos are now up on Flickr.

Some key points of this policy

Information considered to be non-personally-identifiable includes MAC addresses and "information about the live content you are watching".



LG Smart TV Privacy Policy


That's an interesting concept, which I will get back to. For now, note that that specific phrase is not indexed anywhere in BigTable, implying it is not published anywhere that Google can index it.
Phrase not found: "information about the live content you are watching"

Or "until you sit through every page with a camera this policy doesn't get out much"

If you have issues, don't use the television

LG Smart TV Privacy Policy

That's at least consistent with customer support.

Anyway, there are a lot more slides. One of them gives a contact who, when you tap the name into LinkedIn, not only turns out to be the head of legal at LGE UK, but is one hop away from me: datamining in action.

Now, returning to a key point: Is TV channel data Non-personal information?

Alternatively: If I had the TV viewing data of a large proportion of a country, how would I deanonymize it?

The answer there is straightforward: I'd use the work of Arvind Narayanan and Vitaly Shmatikov, Robust De-anonymization of Large Sparse Datasets.

In that seminal paper, Narayanan and Shmatikov took the anonymized Netflix dataset of (viewers -> (movies, ratings)+), and deanonymized it by comparing film ratings on Netflix with IMDb reviews, looking for IMDb reviews that appeared shortly after a Netflix rating, with scores matching or close to it. They then took the sequence of a viewer's watched movies and looked to see if a large set of their Netflix ratings met that match criteria. At the end of which they managed to deanonymize some Netflix viewers -correlating them with an IMDb reviewer many standard deviations out from any other candidate. They could then use this match to identify those movies which the viewer had seen and yet not reviewed on IMDb.

The authors had some advantages: both Netflix and IMDb had ratings, albeit on different scales. The TV viewing details don't, so the process would be more ad hoc (a sketch follows the list):

  1. Discard all events that aren't movies
  2. Assume that anything where the user comes in later than some threshold isn't a significant "watch event" and discard it.
  3. Assume that anything where the user watches all the way to the end is a significant "watch event" and may be reviewed later.
  4. Treat watch events where the viewer changes channel some distance into a movie -say 20 minutes- as significant "watch failure" events, which may be reviewed negatively.
  5. Consider watch events where the user was on the same channel for some time before the movie began as less significant than when they tuned in early.
  6. If information is collected when a user explicitly records a movie, a "recording event", that is treated even more significantly.
  7. Go through the IMDb data looking for any reviews appearing a short time after a significant set of watch events, expecting higher ratings from significant watch events and recording events, and potentially low ratings from a significant watch failure.

I don't know how many matches you'd get here -as the paper shows, it's the real outliers you find, especially the watchers of obscure content.
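
A toy version of that matching, to make the shape of the attack concrete -the class, fields, thresholds and the use of Java records are all mine, purely for illustration, not the paper's code:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

/**
 * Score one candidate viewer against one IMDb reviewer: count the significant
 * watch events that are followed, within a window, by a broadly positive
 * review of the same film.
 */
public class WatchEventMatcher {

  record WatchEvent(String film, Instant end, boolean watchedToEnd) { }
  record Review(String film, Instant posted, int rating) { }    // rating 1..10

  static long score(List<WatchEvent> events, List<Review> reviews, Duration window) {
    return events.stream()
        .filter(WatchEvent::watchedToEnd)                        // significant watch events only
        .filter(e -> reviews.stream().anyMatch(r ->
            r.film().equals(e.film())
                && !r.posted().isBefore(e.end())                 // review after the viewing...
                && r.posted().isBefore(e.end().plus(window))     // ...but within the window
                && r.rating() >= 6))                             // and broadly positive
        .count();
  }
}
```

Run that for every candidate viewer against a reviewer's history, and the viewer whose score sits many standard deviations above every other candidate is your match -the same shape of test as in the paper.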

Even so, the fact that it would be possible to identify at least one viewer this way shows that TV watching data is personal information. And I'm confident that it can be done, based on the maths and the specific example in the Robust De-anonymization of Large Sparse Datasets paper.

Conclusion: irrespective of the cookie debate, TV watching data may be personal -so the entire dataset of individual users must be treated this way, with all the restrictions on EU use of personal data, and the rights of those of us with a television.