Scale R to Big Data Using Hadoop and Spark

Outline:

· Set up a Spark cluster with R installed (R Server).
· Wrangle data stored in HDFS using R.
· Build and deploy a machine learning model using R.

R is currently one of the most popular data science languages in the world. However, it has always had constraints around scaling out to big data. What happened when your data grew beyond a couple of gigabytes? You packed it up and moved to something else: Python, Java, or Mahout, to name a few. Now it’s possible to stick with R throughout your production analysis, all the way to deployment, regardless of the data size.

Organizations like the Apache Software Foundation, Revolution Analytics, Microsoft, and H2O.ai have shown us this year that distributed computing in R is possible. Today we’ll take a look at how the Microsoft stack scales R up to big data.

In this talk we will show you Microsoft R Server: a Hadoop or Spark cluster with R installed on every node, equipped with distributed processing libraries that put all of those nodes to work in parallel. We’ll show you how to run your usual native R code via SSH, and how to get an RStudio Server instance up and running on the cluster.
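Getting R pointed at the cluster is a one-time setup step. Below is a minimal sketch using the RxSpark compute context from RevoScaleR, the distributed processing library that ships with Microsoft R Server; the resource settings here are illustrative, not prescriptive.

    # Assumes RevoScaleR (shipped with Microsoft R Server) is available
    # on the cluster's edge node; resource settings are hypothetical.
    library(RevoScaleR)

    # Describe the Spark compute context.
    spark_cc <- RxSpark(
      consoleOutput = TRUE,  # stream Spark job output back to the R console
      executorCores = 2,     # cores per Spark executor
      executorMem   = "4g"   # memory per Spark executor
    )

    # From here on, rx* functions run distributed across the cluster
    # instead of in the local R session.
    rxSetComputeContext(spark_cc)

Once the compute context is set, the same rx* calls you would run locally are farmed out to the cluster instead.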

We’ll show you how to wrangle data out of HDFS and build machine learning models on your large dataset, then how to package that model up and deploy it to an elastically scaled web service so that anyone can call it for predictions and insights.
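To make that flow concrete, here is a minimal sketch of reading from HDFS and fitting a model with RevoScaleR; the file path, column names, and formula are hypothetical stand-ins for your own dataset.

    library(RevoScaleR)

    # Point at a CSV sitting in HDFS (path and schema are made up for illustration).
    hdfs    <- RxHdfsFileSystem()
    flights <- RxTextData(
      file       = "/user/RevoShare/flight_delays.csv",
      fileSystem = hdfs
    )

    # Distributed summary of every column.
    rxSummary(~ ., data = flights)

    # Fit a logistic regression across the cluster.
    delay_model <- rxLogit(ArrDel15 ~ DayOfWeek + DepHour, data = flights)

And for the deployment step, a sketch using the mrsdeploy package that accompanies Microsoft R Server; the endpoint URL, credentials, and service name are placeholders.

    library(mrsdeploy)

    # Log in to the R Server operationalization endpoint (hostname is hypothetical).
    remoteLogin("http://my-rserver:12800",
                username = "admin",
                password = "<password>",
                session  = FALSE)

    # Wrap the model in a scoring function whose input/output names
    # match the inputs/outputs declared below.
    score_delay <- function(newdata) {
      answer <- rxPredict(delay_model, data = newdata)
      answer
    }

    # Publish as a versioned web service that any authorized client can call.
    api <- publishService(
      "delayService",
      code    = score_delay,
      model   = delay_model,
      inputs  = list(newdata = "data.frame"),
      outputs = list(answer  = "data.frame"),
      v       = "v1.0.0"
    )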

Code and Prep Work (if you want to follow along):
https://github.com/datasciencedojo/meetup/tree/master/scaling_r_to_big_data


About The Author
Data Science Dojo is a paradigm shift in data science learning. We enable all professionals (and students) to extract actionable insights from data.
