Today's Smart TVs: AOL for the living room

I was pleased to hear that Palm had been sold by HP to someone who may care about it: it will have a life beyond the grave. It's not that bad a platform: Linux underneath, HTML + JavaScript on top, with things like Node.js for the threading library. It shows that you don't need a new programming paradigm (iOS, Android) to write applications for mobile devices, just HTML + JS + device service access.

This is effectively what Chromebooks are trying to provide, along with some of the features of HTML5. It'll be interesting to see how much resistance that gets from the phone manufacturers. I expect Google to be happy; Apple, more reluctant.

And the TV vendors? They've clearly decided that, now that television screen diagonals have reached their sensible limits for most households, and having recognised that 3D as a feature has died, they need a new way to convince everyone to renew their televisions on a three-year cycle, and to charge a premium for those new televisions.

Well, a three-year cycle is the replacement cycle for desktops and games consoles, and probably longer than the lifespan of today's phones and tablets, which are on a faster evolutionary curve. The TV vendors must look at the lifespan of tablets and think "we'd like that".

The challenge, then, is simple: convincing customers that their existing television is already obsolete, and that they need a new television (one that will also be rapidly obsolete, though that isn't made clear). They also want to charge margins above those of a basic "monitor".

The Smart TV, then, represents their strategy. Rather than let the games consoles evolve into general-purpose entertainment consoles, the TV vendors want that money. They're probably realistic enough to recognise that they can't get into the existing console business model -a few big games a year- but they will look at phone/tablet app store purchases and think "$10 per app works out if the number of apps increases". Which is of course something that the games console vendors have noticed and are trying to adapt to.

The TV vendors are also unwilling to let anyone else get a toehold. Google TV never took off, and they would run from a similar offering by Microsoft. That's a short-term strategy, one which would be killed if Apple were to produce a TV that took the premium market. They must hear the rumours and think "we need a story of our own" -the LG purchase of Palm represents that.

I can see their thinking, but I believe they will have to change the UX they deliver to customers to stand a chance against Apple.

This is an opinion based on having owned an LG "Smart TV" since January. It is not a Smart TV: it is a monitor with aspirations to be AOL.

Actual use case

The driver for retiring our nearly ten-year-old CRT television was the sprog's acquisition of a PS3 for his birthday: finally we had HD content to display. Getting a new television was now justifiable.

My requirements were: an LED TV good for DVD, Blu-ray and games; Freeview HD; the right size for a large, high-ceilinged room without dominating it; lots of HDMI ports; RGB in. 3D was something that games could take advantage of, so that was on the list if it didn't add too much money. "Smart TV" wasn't something I cared about, as the PS3 was where iPlayer and Netflix would run. I made sure we picked up a "PS3 slim", not the more recent "super slim", for a better Blu-ray loading experience.

The day after Xmas, then, I walked down to Richer Sounds to get a TV to match my requirements, having already sized things up (there's a very nice Panasonic iPad app to simulate a TV on the wall) and explored the options. The TV we got was a 47" LG LED panel with lots of HDMI ports, at a price point I was prepared to pay for a TV that I expect to retain for another 6-10 years.

The fact that it was an internet-ready SmartTV was a non-issue; I hadn't even intended to wire that bit up to Ether.

We ended up replacing the AV receiver with one that has HDMI switching (the old one will move into my office for its sound system). The new receiver had AirPlay over Ether, so the TV zone ended up getting a 4x1GbE ether switch hooked up to the Ethernet-over-power backplane I've been running for a while.

As a result, I can now experience Smart TV in all its glory.

Like I said, it reminds me of AOL. And perhaps a Windows 98 PC in 1998, when all the dotcom startups were paying the home PC vendors $20 just for an icon on the desktop or a bookmark in IE4:

AOL-class UI

The left third shows live input (top half) and some notification about new content and a product advert (bottom half). That's an advert on a television I paid for, one I can't disable. Using my internet.

That's the AOL feature.

The central third is the "premium" services, which means "all possible premium services", not "only the ones you are signed up to". It has the three we use: iPlayer (free playback of most BBC TV and radio content from the previous 7 days), Netflix and YouTube. The others: I'm not going to sign up for them, yet they are permanently there, taking up space and delivering no value to me.

I suspect that the vendors may give LG a kickback if someone signs up through the TV.

Moving right, there's some other pane, and more off to the right, none of which anyone can be bothered to explore.

What I do see right at the end is the option to create my own "my apps" pane. I was glad to find this, confident I could now set up the TV with the things I wanted, rather than have the services I wanted hidden in the clutter.

Limited customisation

Except: you can't add "premium" services to "my card". They aren't on the list of selectable services.

There must be some separate array of "premium services" from "standard services", with only the standard services being configurable. Two separate arrays, two ways to keep them up to date. Separate tests.

Having to make do with my not-quite-my-card, I can now move it onto the main screen and get some of the clutter out of the way.
No, you can't turn the adverts off

Though again, there's no ability to move it left of the premium card. That's fixed, with a message at the bottom: "cannot move live card and premium card". Someone has gone to the effort of fixing the minimum position of all customisable cards to panels[x] where x>=2, written the tests for it, and i18n'd the "cannot move" message.

There we have it then: a UI that takes up 1/6 of the screen space with adverts, clutters up the main screen with that and a pool of premium services that nobody would use more than half of, and which doesn't let me clean any of it up.

In comparison, Apple's "we control your tablet" philosophy is a bubble of flexibility: I can choose whatever goes on the start screen and on the app bar at the bottom. Not in LG "SmartTV" land.
Graham Norton on iPlayer

As for the applications: they work. iPlayer will happily stream Graham Norton down in HD, which is something I personally consider a defect. You can also mark it as a favourite, which I consider a defect in an individual.

Even so, the viewer isn't as good as the PS3 options. iPlayer's scroll forwards/backwards is very crude, accurate to about 5 minutes, rather than the 30s or so that the PS3 version offers. It's got pretty bad latency in some of the navigation features, implying there's not much caching going on -memory limited?

As for Netflix: you can't add ratings to get better recommendations, and you don't get to see the "similar to" recommendations for any film. It's a worse UI than on an iPad.

Which raises a key issue that LG and all the SmartTV vendors have: convincing anyone to code for their devices.

This is the problem that phones have had, which Apple solved by having massive market share in markets they effectively created, and by providing a good user experience for their users -especially those with a laptop, tablet and phone all from Apple. Google have allowed the other phone and PC vendors to play catch-up through Android.

Phone and tablet developers, then, have a small set of options.
  1. Apple. Essential if you do tablet work, important if you do phones. Their own programming language and tooling, and an oppressive qualification process -offering users trustable apps when they've finished. What's nice about Apple: a minimal number of platforms to test on, and with new OS releases reaching older devices quickly, no reason not to adopt the latest features.
  2. Android. The other app platform: the Java language and a compatible runtime; open to all vendors, though customers get different backport experiences depending on the phone vendor. For developers: a lot more testing, and you have to worry about which OS versions are in use in the field. Support calls are probably worse. In favour though: one core codebase for all Android phones.
  3. HTML5. Viable if you are targeting an online-only world, though phone support here has been weak (cite: Facebook's move from HTML5 to native apps).
  4. There's also Windows Phone, which may be too late as an app platform, and will have to focus on delivering an excellent HTML5 experience.
How is any one SmartTV vendor going to play here if these platforms move into the TV world? Either they talk to MS or Google and say "we can't do platforms, help us", or they say "HTML5 is all we need" and work to deliver a really good HTML5 experience. (DRM in HTML5 may help or hinder here. Help: it lets Netflix and those traitors to openness at the BBC deliver apps. Hinder: if the TV can't actually get the closed codec/auth modules.)

I don't see LG's acquisition of Palm being sufficient to stop them being forced to copy the phone/tablet strategies.
  • If Apple comes to play, they can take advantage of their tablets, phones and iPod touches, make these the personal GUI for the TV, recognise that multiple people in front of the TV will have them, and provide an app platform that lets developers write apps that not only work on tablets, phones and TVs -but can even work between them. Netflix does some of that already -their tablet/phone apps can tell the PS3 and the TV to play content, which helps compensate for some of the limitations of the TV app.
  • Google can come to the other vendors and say "here's a way out". Samsung are already making Android phones and tablets -I'd expect them to go with Google. Sony have Android phones too, but they also have the PS4 to work on -and presumably see that as more strategic than smart TVs.
  • LG? Palm? They don't have the market share. Unless they can get together with the other TV vendors and say "here's an independent strategy" -and have them listen.
Irrespective of what strategy today's TV vendors take, one thing they have to recognise is that their AOL-class GUI isn't going to cut it. If in-TV applications take off, the quality of the UX is going to matter -and right now, they are 15 years behind what PCs, phones and tablets are offering.


Defeated by iPad synchronization options

Last January I got an iPad mini as a travel accessory to the laptop: music, eBooks, PDF formatted papers, online and offline maps, etc.

CCTV on Gloucester Road

It's also intended to be the holder of travel paperwork: the schedule, logistics notes, eTickets, hotel details. All mostly PDF, though my KLM check-in has just emailed a GIF QR barcode which apparently will get me through security (outbound I'm testing w/ a backup paper one; return: commit to GIF).
CCTV on Gloucester Road
A major use case of mine then is: get PDFs off my laptop and into the iPad so that I can bring them up and view them.

Which is where it all seems to go horribly wrong.

I can see five different synchronization options.

iTunes

Copy the PDFs into the books section of iTunes and let them trickle over via USB or wifi. This works, provided the devices can see each other on the same wifi subnet.

It is a bit clunky, as I have to drag and drop content from my folder of travel bits (e.g. 2013-03-AMS) into a flat pool of documents, where they end up mixed in with things like Grinstead and Snell's Introduction to Probability, papers on things like Chubby, and copies of Singletrack Magazine. This isn't ideal for navigating at the airport security gate.

But again: it works, and I know how to verify that the stuff has trickled over -you look at the sync status page.

  1. Clean up the last trips' documents.
  2. Copy in the new files
  3. Force a sync to make sure it is over.
  4. Updates: steps 2 and 3.
Gloucester Road Art

Apple iCloud

This is meant to be the future. Instead of saving to the filesystem, you save it to "the cloud" where it will magically make its way over to your other devices.

Except I put PDFs in there and there doesn't seem to be any obvious way to actually see that they have made it over, let alone open them.

This is not a Cloud, it is /dev/null with unrealistic promises. I could say that if I copy a file to /dev/null then all my other devices will get the same view of the copied documents -but if they aren't there, it's not a very good view.

The workflow for getting documents over via iCloud is therefore
  1. Save the files into iCloud.
  2. Pick one of the other synchronization options to get your content over.

45 rpm on Gloucester Road


Dropbox

Dropbox uses the folder metaphor: I can drag and drop anything into it on my desktop, and it trickles over across all my desktops, OS/X and Linux.

What it doesn't do is automatically trickle the files over to the iPad. It copies the directory metadata over, but I seem to have to tap every file -by hand- before it will download each artifact in the filesystem.

For anyone with a default 2 GB Dropbox account, everything could be copied over while on wifi without using up much device space -even for customers like me, who went for the low-end 16GB model because the cost/GB of extra SSD in an iPad felt utterly excessive.

The workflow to sync is therefore
  1. Save all the content into a Dropbox-managed folder
  2. Go to the tablet and find that folder
  3. Go to every file in it, and manually hit the download button, then wait for it to D/L. Repeat for all files, in a process that is O(files × filesize).
  4. The update process is steps 1-3, repeated.

Box

I also have a Box account, and an iPad app for it. This has some flags about auto-syncing on wifi only, which I'm happy with, not having a device with a modem in it (tethering & wifi usually suffice).

It also has -and this got me excited- the ability to mark folders as "favorite", where it is claimed that content will auto-sync to the pad. I was hopeful here: I marked my travel folder as a favourite and then put stuff into trip-specific subdirs underneath, for this week's trip, next month's US trip, and others.

I go over to the 'pad, expecting the files to be there.

Only they aren't, because the favourite bit is not recursive.

Once you know that, Box sync becomes manageable:
  1. Create a folder for each planned trip.
  2. Copy travel docs in there.
  3. Go to your tablet, and mark that folder as a favourite, even if the parent dir is already marked as such.
  4. Update the travel folder on the laptop -things will now trickle over.
Remember step #3 and it does work.
CCTV on Gloucester Road

Email

Just mail the documents to yourself the day before you travel, then download the mail and make sure the files are there.

This appears to work, though there are probably limits on how big a file will be auto-D/L'd, and the files go away at the history rate specified in the mail app -no good for a long trip.

  1. Make sure the mail app is set to cache data for >= the length of your trip.
  2. Email the PDFs to yourself.
  3. Verify in the mail app that they have all arrived.
The nice feature of this is that it works from everywhere, whether or not Box is installed. You can also get other people to contribute to the document pool by having them email you directly.

There you have it: ways I've tried to sync documents.

CCTV on Gloucester Road

If iCloud actually did what is promised -"share your content across devices via Apple's cloud"- then it might work, even if its metaphor ("not a filesystem, but a place where every artefact is permanently bonded to whichever application put it there, even if there is >1 text or PDF viewer on a device") is so dumbed down it represents a step back to Mac 1.0.

Unfortunately, the behaviour I see -"the same consistency and durability model as copying files to /dev/null"- means that there is no way I would trust it with anything. I actually hope there is something obvious I'm missing here, as I can't understand how something so dire would spring into existence, and I don't believe the business plan of "make money from premium users who want to store more stuff" stands a chance against tools that actually work.

Instead I've settled on Box, making sure that things go over (opening a non-random sample of them -I should toss a coin over each file for better randomness).

Oh, and print out my boarding card and the map from the train station to the hotel.

[Photos: some of the CCTVs I saw on a single walk down Gloucester Road. I'm not quite sure what problems this high street had that needed near-ubiquitous CCTV coverage, but there are enough cameras to have fixed them. I like the one pointed straight at the ATM best.]


Enterprise Hadoop: yes, but how are you going to fix it?

EMC's Pivotal HD has started a lot of debate as to whether building on top of Hadoop can be considered being part of the Hadoop ecosystem, or whether it's an attempt to co-opt it: to do something closed and claim that it is part of a bigger system.

Can you say you are "part of the Hadoop stack" when all you are doing is shipping a closed-source layer on top? I think that's quite nuanced, and depends on what you do -and how it's interpreted.

see no evil 2012

  1. The Apache License grants everyone the freedom to take the source away and do anything they want with it
  2. There is no requirement for you to contribute a single line of code back -or even a single bug report.
This is a difference between the ASF license and GPL-licensed software which you redistribute: with GPL code the changes must (somehow) be published. 

Other aspects of the ASF license:
  1. You can't abuse ASF brand names, which in the Apache Hadoop world means you can't use Apache Hadoop, Apache HBase, Apache Mahout, Apache Giraph, Apache Pig, Apache Hive, etc. in your product names. There are some excellent guidelines on this in the wiki page Defining Hadoop -and if you want actual feedback, email the trademarks@ list. It may seem that doing so removes the secrecy/surprise factor of your product announcement, but better that than a hurried renaming of all your products and documentation.
  2. If you sue other users of the product over patents of yours that you believe apply to the technology, you revoke your own rights to the software. I haven't known that to happen with Apache products -though the Oracle/Google lawsuit did cover copyright of APIs and reimplementations thereof. If APIs ever became copyrightable, then decades of progress in the computing industry would grind to a halt.
People are also free to look at Apache APIs and clean-room re-implement them; you just can't use the Apache product names at that point. Asserting compatibility then becomes indefensible: if you look at the ASF JIRAs, even 100% compatibility across versions is hard to achieve -and that's with the same source tree. It's not the binary signature that is (usually) the problem, it's what happens afterwards that's trouble. Little things like whether renaming a file is atomic, or what happens when you ask for the block locations of a directory.
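The rename case is easy to demonstrate on a local POSIX filesystem. A minimal sketch -the HDFS comparison in the comments reflects my understanding of FileSystem.rename, so treat it as an assumption rather than a specification:

```shell
# POSIX rename() atomically replaces an existing destination file;
# HDFS's rename, by contrast, refuses to overwrite an existing path.
# A "compatible" reimplementation has to match this kind of detail,
# not just the method signature.
tmp="$(mktemp -d)"
echo old > "$tmp/dest"
echo new > "$tmp/src"
mv "$tmp/src" "$tmp/dest"   # one atomic step: no window with a missing dest
cat "$tmp/dest"             # prints: new
rm -rf "$tmp"
```

It's exactly this class of behavioural difference -invisible in any API signature- that makes "100% compatible" such a hard claim to defend.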

Now, what about introducing a closed-source product on top of Hadoop and saying you are part of the Hadoop ecosystem, that you have x-hundred people working on Hadoop?

This is where it gets tricky.

Some people say "it's like building on Linux" -and there are some very big closed applications that run on Linux. A big one that springs to mind is the Oracle RDBMS.

Are the thousands of people who work on Oracle-on-Linux "working on Linux"? Are they working on "Oracle on Linux", or are they working "on Oracle", on Linux?

Whichever way you look at it, those people aren't working on the Linux OS itself, just on something that runs on top of it. Would you call it part of the Linux "stack", the way MySQL and Apache HTTPD are?

Personally: I have no idea.

see no evil 2012

What probably doesn't happen from Oracle's work is any direct feedback from their application into the OS. [Correction: it does, thx @tlipcon]. I also doubt that RedHat, Novell and others regression-test the Oracle RDBMS on their latest builds of Linux. By their very nature, closed-source applications fall out of the normal OSS regression and release test processes, which rely not only on the open source trees, but on the open test suites. This is also why Oracle's action in not releasing all the tests for MySQL seems so short-sighted: it may hurt MariaDB, but it also hinders Linux regression testing.

Breaking that link between the OS and the application means that Oracle have not been in a position to rapidly adapt to problems in the OS and filesystem: there's no way to push their issues back upstream, to get changes in, to get new releases out in a hurry to fix a problem with their application or hardware. Instead the onus falls on the application vendor to deal with the problem themselves.

How have Oracle handled this? Eventually, by getting into the Linux distribution business themselves, with Oracle Unbreakable Linux. By releasing a complete OS build, they can coordinate OS and application releases, fix their version of the OS to handle problems that surface in Oracle's applications -on a timetable that works for them- handle Oracle hardware support in a timely manner, and charge support revenue from users.
That works -at a cost. By forking RedHat Linux, Oracle have taken on all the maintenance and testing costs themselves.

The amount that Oracle charges has to cover those costs, or the quality of the Oracle fork of Linux degrades relative to the reference points of RHEL and Debian.

For Oracle, the combined OS+11g+Exadata deal has enough margin in the database that they can come up with a price that is less than ({HP | Dell} + RHEL + Oracle 11g), so presumably those costs can be covered. What's not clear is this: did Oracle get into the business of selling a supported Linux because they saw money in it, or because they concluded that their hardware and database products effectively mandated it?

Other companies getting into the business of redistributing Hadoop-derived products -to customers who are paying those companies in the expectation of support- are going to have to start thinking about this.

If you have just sold something that has some Hadoop JARs in it -code that the customer depends on- and they have a problem, how are you going to fix it?

Here are some strategies:
  1. Hope it won't be a problem. Take the Apache artifacts and ship them as-is. It is, in the opinions of myself and my Hortonworks colleagues, production ready. Push customers with problems to issues.apache.org, or forward the issues yourself. You could do the same with CDH, which, in the opinions of my friends at Cloudera, is also production ready. Do that, though, and issues on Apache JIRA will be ignored unless you can replicate them on the ASF artifacts.
  2. Build your own expertise: this takes time, and while that happens you aren't in a position to field support calls. If you make your own releases, you end up needing your own test infrastructure, QA'ing everything, and tracking the changes in Hadoop trunk and branch-1.
  3. Partner with the experts: work with people who have an in-depth understanding of the code, its history and why decisions were made, and experience in cutting production-scale releases suitable for use in web companies and enterprises. That means Hortonworks and Cloudera. Many of the enterprise vendors do this, because they've realised it is the best option.
The web companies -the early adopters- went for #1 and ended up with #2: build your own expertise. This is effectively what I did in my HP Labs work on dynamic in-cloud Hadoop. You can see my journeys through the source -while working on big things, little things crop up, especially problems related to networking in a virtual world, configuration in a dynamically configured space, and the recovery/sync problems that my service model discovered. I still only know my way through a fraction of the code, but every project I work on builds up my understanding, and contributes stuff back to the core, including things like better specifications of the filesystem APIs' semantics, and the tests to go with them.

That trail of JIRAs related to my work shows up something else: if you are delving deep into Hadoop, your reading of the code alone should be enough to get you filing bugs against minor issues, niggles, potential synchronization, cleanup or robustness problems. If you are pushing the envelope in what Hadoop can do: bigger issues.

We are starting to see some involvement in hadoop-core from Intel, though apart from the encryption contribs it still appears to be at an initial stage -though Andrew Purtell has long been busy in HBase. We do see a lot of activity from Junping Du of VMWare -not just the topology work, but other big virtualisation features, and the day-to-day niggles and test problems you get working with trunk. Conclusion: at least one person in VMWare is full-time on Hadoop. Which is great: the more bugs that get reported, and the more patches, the better Hadoop becomes. Participating in the core code development develops your expertise while ensuring that the Apache (hence Hortonworks and Cloudera) artifacts meet your needs.

Are there other contributors from EMC? Intel? I have no idea. You can't tell from gmail & ymail addresses alone; you'd have to deanonymize them by going via LinkedIn. That's not just name matching: you can use the LI "find your contacts" scanner to go through those people's email addresses and reverse-look-up their names. The same goes for Twitter. I may just do that for a nice little article on "practical deanonymization".

In the meantime, whenever someone comes to you with a product containing the Apache Hadoop stack, say "if there is a problem in the Hadoop JARs - how are you going to fix it?"

[Artwork: See no evil by Inkie, co-organiser of the See No Evil event. Clearly painted with the aid of a cherry picker]


Hadoop, Java7 and OSX or "what is it about JAVA_HOME"

The imminent End of Life for Java6 prompted me to move my OSX laptop up from Apple Java6 to Oracle Java7.

I've just concluded that this was a mistake and am trying to roll back.

Summary: because Oracle have managed to move things around in ways the build tools don't expect, you should stay on Java6.

V is for Victory!

Key points
  1. Apple will no longer support Java on OSX themselves; they are delegating this to Oracle.
  2. Oracle puts things in a different place
  3. Some of the JARs appear to have changed too
  4. This breaks the classpath that Hadoop uses to generate its annotations
  5. It also breaks the jspc compiler
Where things moved
Before: /System/Library/Frameworks/JavaVM.framework/Versions contained links to the JDKs found at /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents
After: /System/Library/Frameworks/JavaVM.framework/Versions doesn't get updated any more; things live in /Library/Java/JavaVirtualMachines/

Choosing which JDK/Java version to run

The Applications/Utilities/Java Preferences applet gets removed, and replaced by something in System Preferences -something that doesn't seem to pick up any of the existing JDKs, and so only gives you the option of choosing a Java 7+ version.

You can reinstall that applet, and so get hold of the full option set.

However, that doesn't seem to trickle down to /usr/libexec/java_home, which still defaults to 1.7 (at least for me).

Fixing JSPC

See HADOOP-9350 for a workaround.

Fixing Hadoop-annotations to compile

There were lots of errors about com.sun.javadoc imports failing.

That's clearly a classpath problem.
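My guess -an assumption, not something I've verified in depth- is that Apple's JDK bundled the javadoc/doclet classes into its own classes.jar, while Oracle's layout keeps com.sun.javadoc in lib/tools.jar, so anything that found the doclet API implicitly under the old layout now misses it. A quick diagnostic:

```shell
# Report where (if anywhere) the javadoc doclet API lives under JAVA_HOME.
# Oracle JDKs keep it in lib/tools.jar; the Apple JDK bundled it elsewhere.
if [ -n "${JAVA_HOME:-}" ] && [ -f "${JAVA_HOME}/lib/tools.jar" ]; then
  if unzip -l "${JAVA_HOME}/lib/tools.jar" | grep -q 'com/sun/javadoc'; then
    echo "doclet API found in ${JAVA_HOME}/lib/tools.jar"
  else
    echo "tools.jar present but no doclet API"
  fi
else
  echo "no tools.jar under JAVA_HOME"
fi
```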

At this point I gave up, fixing my .profile to pick up

export JAVA_HOME=`/usr/libexec/java_home  -v 1.6`

This sets JAVA_HOME to the Apple JDK6 installation -close enough to a functional JDK6 that things would build and test again.
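As a .profile fragment, the rollback can be guarded so it's a no-op on machines without /usr/libexec/java_home (this assumes the Apple JDK6 is still installed):

```shell
# Pin builds back to JDK6 where java_home exists (OS X);
# elsewhere this leaves JAVA_HOME untouched.
if [ -x /usr/libexec/java_home ]; then
  JAVA_HOME="$(/usr/libexec/java_home -v 1.6)"
  export JAVA_HOME
fi
echo "building with JAVA_HOME=${JAVA_HOME:-unset}"
```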

One of the recurrent pains of all Java-based console apps is finding the JDK location. It's in some directory with spaces on Windows; in different places on Ubuntu and RHEL on Linux; and now this: Oracle coming up with a layout on OSX that may make sense for them, but breaks builds downstream.

It shouldn't have to be this complex. Every OS could have standard locations where things end up -locations that stay constant over time. It doesn't matter whether the old Apple layout was wrong and the new Oracle filesystem layout is "better"; the issue is that they are different, incompatible, and harder to switch between than you'd expect.

This leads to a follow-on problem: I'm not prepared to waste any more time upgrading to Java7 on my laptop for some months, even though Java6 isn't getting security updates any more.

The changes to the Java installation on OS/X provide a disincentive for Java developers to upgrade, which not only leaves those machines vulnerable security-wise (if their owners are naive enough not to disable the Java browser plugin), but means that those developers aren't coding for Java7 -or even testing on it.

Macs may be minuscule compared to the Windows enterprise development world, and never a place where server-side code runs, but they are now a significant fraction of the open source developer community -at any of the ApacheCon events, Windows laptops are in the minority, and this in a place where a Linux laptop doesn't raise an eyebrow.

By doing things that seem designed to make developing Java7 code on Macs near-impossible, the conclusion is that either Oracle don't want people to code on Macs, or they don't want Java7 code. Well, that's what they are going to get -at least from me.

[Photo: Anonymous, Easton]