I'm pretty new to Spark as well, but I think you might be mixing apples and oranges here. Your Spark master is the "controller" machine for your Spark cluster, so where is your Spark master actually running? If it's not on the same machine as your
Java server, then it won't be "localhost" from the servlet's point of view, and you'll need to provide the appropriate master URL when you create the Spark context. It also looks like the Spark libraries are not available to your servlet. I don't use Java with Spark myself, so you'll need to check which JARs etc. you need on your application server, e.g.
http://stackoverflow.com/questions/23752425/how-to-use-the-spark-cluster-computing-function-in-servlets
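As a rough illustration (not tested against your setup), this is roughly how you'd point a Java context at a remote master rather than "localhost" - the host, port and app name below are placeholders you'd replace with your own:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkContextFactory {

        // Build a context that targets the cluster instead of "localhost".
        // "spark://your-master-host:7077" and the app name are placeholders -
        // substitute whatever your Spark master actually advertises on its web UI.
        public static JavaSparkContext create() {
            SparkConf conf = new SparkConf()
                    .setAppName("servlet-spark-example")
                    .setMaster("spark://your-master-host:7077");
            return new JavaSparkContext(conf);
        }
    }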
More generally, my understanding is that the point of Spark is to distribute processing over multiple machines, so what benefit do you expect from caching the data on your app server? Spark has its own caching mechanism via
RDD persistence, so that may be a better fit for your needs.
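For what it's worth, here's a minimal sketch of RDD persistence in the Java API (the HDFS path is made up; the point is just that `cache()` keeps the RDD in the cluster's memory so later actions reuse it):

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class RddCachingSketch {

        public static void run(JavaSparkContext sc) {
            // "hdfs:///data/events.log" is just an example path.
            JavaRDD<String> lines = sc.textFile("hdfs:///data/events.log");

            // cache() keeps the RDD in memory across the cluster, so repeated
            // actions reuse it instead of re-reading from storage each time.
            lines.cache();

            long total = lines.count();                                   // first action fills the cache
            long errors = lines.filter(l -> l.contains("ERROR")).count(); // second action reuses it
            System.out.println(total + " lines, " + errors + " errors");
        }
    }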
One thing you really want to avoid with Spark and/or Hadoop is moving lots of data around, e.g. in and out of a cache. Process the data in situ where possible, and only move (or cache) the results.
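In other words (sketching on from the example above, with a made-up log format where the level is the first token), do the aggregation on the cluster and pull back only the small result, rather than collecting the raw records into your servlet:

    import java.util.Map;
    import org.apache.spark.api.java.JavaRDD;
    import scala.Tuple2;

    public class AggregateOnClusterSketch {

        public static Map<String, Long> countByLevel(JavaRDD<String> lines) {
            // Anti-pattern: lines.collect() would ship every record to the servlet JVM.
            // Instead, reduce on the cluster and fetch only the per-level counts.
            return lines
                    .mapToPair(l -> new Tuple2<String, Long>(l.split(" ")[0], 1L))
                    .reduceByKey(Long::sum)
                    .collectAsMap();
        }
    }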