Add content for spark and mapreduce (#2649)

* Update 100-hadoop-spark-mapreduce.md

* Update content/roadmaps/114-software-architect/content/109-working-with-data/100-hadoop-spark-mapreduce.md

Co-authored-by: Kamran Ahmed <kamranahmed.se@gmail.com>
Tomasz Hamerla 2 years ago committed by GitHub
# Spark, Hadoop MapReduce

[Apache Spark](https://spark.apache.org/) is a data processing framework that can quickly perform processing tasks on very large data sets, and can also distribute data processing tasks across multiple computers, either on its own or in tandem with other distributed computing tools.

Hadoop MapReduce is a software framework for easily writing applications that process vast amounts of data (multi-terabyte datasets) in parallel, on large clusters (thousands of nodes) of commodity hardware, in a reliable, fault-tolerant manner.
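To make the MapReduce programming model concrete, here is a minimal word-count sketch in plain Python (no Hadoop or Spark required, so the names and helper functions below are illustrative only). In a real cluster, the map, shuffle, and reduce phases run distributed across many nodes; here they run sequentially in one process.

```python
from collections import defaultdict
from functools import reduce

# Map phase: emit a (word, 1) pair for every word in every input line.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group emitted values by key, as the framework
# does between the map and reduce stages.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: combine the grouped counts for each word.
def reduce_phase(grouped):
    return {word: reduce(lambda a, b: a + b, counts)
            for word, counts in grouped.items()}

lines = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

Spark expresses the same pipeline in a few chained transformations (roughly `textFile(...).flatMap(...).map(...).reduceByKey(...)`), keeping intermediate data in memory, which is one reason it is often much faster than disk-based MapReduce for iterative workloads.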
<BadgeLink colorScheme='yellow' badgeText='Read' href='https://www.integrate.io/blog/apache-spark-vs-hadoop-mapreduce'>Spark vs Hadoop MapReduce</BadgeLink>
<BadgeLink badgeText='Watch' href='https://www.youtube.com/watch?v=aReuLtY0YMI'>Hadoop explained in 5 minutes</BadgeLink>
