Here, Hardknott Pass falls into the challenge category —at least in summertime. You know you'll get up; the only question is whether you'll be cycling or walking.
Hardknott in winter is a different game: it's a "should I be trying to get up here at all?" kind of issue. For reference, the answer is usually no. Find another way around.
Upgrading Hadoop's dependencies jitters between the two, depending on which upgrade is being proposed.
And, as the nominal assignee of HADOOP-9991, "upgrade dependencies", I get to see this.
We regularly get people submitting one-line patches, "upgrade your dependency so you can work with my project" —and they are such tiny diffs that people think "what a simple patch, it's easy to apply".
The problem is that these one-line patches can lead to the HBase, Hive or Spark teams cornering you and asking things like "why do you make my life so hard?"
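The patches really are that small. A typical one is a single-line version bump in the root POM, along these lines (the property name and version numbers here are purely illustrative, not from any actual patch):

```diff
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ (illustrative) @@
-    <jackson2.version>2.2.3</jackson2.version>
+    <jackson2.version>2.7.8</jackson2.version>
```

One line in the diff; the blast radius is every project that puts Hadoop's JARs on its classpath.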
Before we make the leap to Java 9, we're trapped whatever we do. Upgrade: things downstream break. Don't upgrade: things downstream break when they update something else, or pull in a dependency which has itself updated.
While Hadoop has been fairly good at keeping its own services stable, where it causes problems is in applications that pull in the Hadoop classpath for their own purposes: HBase, Hive, Accumulo, Spark, Flink, ...
Here's my personal view on the risk factor of various updates.
We know these will be trouble —their upgrades are full cross-project epics
- protobuf. This will probably never be updated during the lifespan of Hadoop 2, given how Google broke its ability to link to previously generated code.
- Guava. Google cut things. Hadoop ships with Guava 11, but has moved off all the deleted classes, so runs happily against Guava 16+. I think it's time just to move up, on the basis of Java 8 compatibility alone.
- Jackson. The last time we updated, everything worked in Hadoop, but broke HBase. That makes everyone very sad.
- In Hive and Spark: Kryo. Hadoop core avoids that problem; I did suggest adding it purely for the pain it would cause the Hive team (HADOOP-12281). They knew it wasn't serious, but as you can see, others got a bit worried. I suspect it was experience with my other POM patches that made them worry.
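To make the Guava problem concrete: later Guava releases dropped methods that Guava 11 still had, `Closeables.closeQuietly(Closeable)` being the classic example (from memory; check the Guava release notes for the exact version it vanished in). The defensive fix is to stop calling Guava for this at all and use JDK 7's try-with-resources:

```java
// Sketch of migrating off a deleted Guava API. Old Guava-11-era code:
//   InputStream in = open();
//   try { use(in); } finally { Closeables.closeQuietly(in); }
// JDK-only replacement: try-with-resources, which no Guava upgrade can break.
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class QuietClose {

    // Reads one byte, closing the stream quietly on all paths.
    static int readFirstByte(InputStream in) {
        try (InputStream s = in) {      // closed automatically, even on error
            return s.read();            // -1 at end of stream
        } catch (IOException e) {
            return -1;                  // "quiet" behaviour of the old helper
        }
    }

    public static void main(String[] args) {
        InputStream in = new ByteArrayInputStream(new byte[]{42});
        System.out.println(readFirstByte(in)); // prints 42
    }
}
```

Every call migrated off Guava this way is one less way a downstream project's choice of Guava version can break you.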
Failures here are traumatic enough that we're scared of upgrading unless there's a good reason.
- jetty/servlets. Jetty has been painful (threads in the Datanodes to perform liveness monitoring of Jetty are an example of the workarounds), but it's a known and managed problem. The plan is to move off Jetty entirely, to Jersey + Grizzly.
- Servlet API.
- jersey. HADOOP-9613 shows how hard that's been.
- Tomcat. Part of the big webapp set.
- Netty —again, a long-standing sore point (HADOOP-12928, HADOOP-12927).
- httpclient. There's a plan to move off httpclient completely, stalled on hadoop-openstack. I'd estimate 2-3 days of work there, more testing than anything else. Removing a dependency entirely frees downstream projects from having to worry about the version Hadoop ships with.
- Anything which has JNI bindings. Examples: leveldb, the codecs
- Java itself. Areas of trauma: Kerberos, java.net, SASL.
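On the httpclient point: until the dependency is actually gone, a downstream project that wants a different httpclient version has to exclude Hadoop's copy by hand in its own POM, roughly like this (the coordinates are the classic commons-httpclient ones; treat the whole fragment as a sketch, not a recipe):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <!-- keep Hadoop's transitive httpclient off our classpath -->
      <groupId>commons-httpclient</groupId>
      <artifactId>commons-httpclient</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Which is exactly the kind of per-project busywork that disappears once the dependency is removed at source.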
With the move of trunk to Java 8, those servlet/webapp versions all need to be rolled.
These are things where we have to be very cautious about upgrading, either because of a history of brittleness, or because failures would be traumatic.
- Jets3t. Every upgrade of jets3t has just moved the bugs around. It's effectively frozen as "trouble, but a stable trouble", with S3a being the future.
- Curator 2.x (see HADOOP-11612; HADOOP-11102). I had to do a test rebuild of Curator 2.7 with Guava downgraded to Hadoop's version to be confident that there were no codepaths that would fail. That doesn't mean I'm excited by Curator 3, as it's an unknown.
- Maven itself
- Zookeeper —for its use of guava.
Generally happy to upgrade these as later versions come out.
- SLF4J: yes, repeatedly.
- log4j 1.x (2.x is out, as it doesn't read log4j.properties files).
- Avro, as long as you don't propose picking up a pre-release.
- Apache commons-lang (minor versions: yes; major versions: no).
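For the log4j entry above, the sticking point is the classic 1.x properties format, which operations teams everywhere have deployed and tuned, and which log4j 2 won't read. A minimal example of the format at stake:

```properties
# Classic log4j 1.x configuration: the file format log4j 2.x won't parse.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
# per-package overrides: the bit ops teams actually rely on
log4j.logger.org.apache.hadoop=DEBUG
```

Breaking every one of those deployed files is what makes a 2.x move a non-starter.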
I don't know which category the AWS and Azure SDKs fall into. Their Jackson dependency flags them as a transitive troublespot.
Life would be much easier if (a) the Guava team stopped taking things away and (b) either Jackson stopped breaking things or someone else produced a good JSON library. I don't know of any; I have encountered worse.
2016-05-31 Update: ZK doesn't use Guava; that's Curator I'm thinking of. Correction by Chris Nauroth.