How were machine learning programs run on huge data sets before Spark ML?

 
Ranch Hand
Posts: 1363
Spark includes Spark ML, which is used for running machine learning programs on huge data sets; running them without Spark would take a long time. Before Spark, data processing on huge volumes of data was done with MapReduce programs, but what about machine learning programs on huge data sets? How was that done before Spark ML?
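
To make the question concrete, here is roughly the pattern I imagine plain Hadoop MapReduce would force on a learning algorithm. This is only a minimal sketch (the class names and the "model.weights" configuration key are my own invention): one gradient-descent iteration for linear regression, where each pass over the training data is a separate MapReduce job and the current weights are shipped to the mappers through the job configuration.

// Sketch: one linear-regression gradient-descent iteration as a MapReduce job.
// Input lines are assumed to look like "label,f0,f1,...".
import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class GradientStepJob {

    // Mapper: parses one record and emits (featureIndex, partial gradient).
    public static class GradientMapper
            extends Mapper<LongWritable, Text, IntWritable, DoubleWritable> {

        private double[] weights;

        @Override
        protected void setup(Context context) {
            // Current weights are broadcast to every mapper via the job
            // configuration ("model.weights" is a made-up key for this sketch).
            String[] parts = context.getConfiguration().getStrings("model.weights");
            weights = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                weights[i] = Double.parseDouble(parts[i]);
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            double label = Double.parseDouble(fields[0]);
            double prediction = 0.0;
            double[] x = new double[fields.length - 1];
            for (int i = 0; i < x.length; i++) {
                x[i] = Double.parseDouble(fields[i + 1]);
                prediction += weights[i] * x[i];
            }
            double error = prediction - label;
            // This record's contribution to each component of the gradient.
            for (int i = 0; i < x.length; i++) {
                context.write(new IntWritable(i), new DoubleWritable(error * x[i]));
            }
        }
    }

    // Reducer: sums the partial gradients for one weight component.
    public static class GradientReducer
            extends Reducer<IntWritable, DoubleWritable, IntWritable, DoubleWritable> {

        @Override
        protected void reduce(IntWritable index, Iterable<DoubleWritable> partials,
                              Context context) throws IOException, InterruptedException {
            double sum = 0.0;
            for (DoubleWritable p : partials) {
                sum += p.get();
            }
            context.write(index, new DoubleWritable(sum));
        }
    }
}

A driver would have to submit one such job per iteration, read the summed gradient back, update the weights, and then re-read the whole input from HDFS for the next iteration. That seems like a lot of overhead for iterative training, which is what makes me wonder how it was actually handled in practice before Spark ML.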

Thanks
 