I have just read the “Lakehouse: A New Generation of Open Platforms that Unify Data Warehousing and Advanced Analytics” paper and decided to write a short blog post going through some of the key points of the paper’s motivation. Let’s start.
Hadoop: The End of an Era
I’ll start with a bold statement: Hadoop is rapidly losing momentum. We can see it in the following Google Trends chart:
Apache Spark Future
Everyone around the internet is constantly talking about the bright future of Apache Spark: how cool it is, how innovative it is, how fast it is moving, how big its community is, how big the investments in it are, and so on. But what is really hiding behind this enthusiasm of Spark advocates, and what is the real future of Apache Spark?
In this article I show you real data and real trends, trying to be as unbiased as possible. This article is not affiliated with any vendor.
Modern Data Architecture Talk
Here is the video of my talk on Modern Data Architecture from Java Day Kiev 2015.
The slides are available here: Modern Data Architecture – JD Kiev v05
Spark Architecture Video
This is the talk I gave at Java Day Kiev 2015. It was a great conference, after all.
Spark Memory Management
Starting with Apache Spark version 1.6.0, the memory management model has changed. The old memory management model is implemented by the StaticMemoryManager class and is now called “legacy”. “Legacy” mode is disabled by default, which means that running the same code on Spark 1.5.x and 1.6.0 can result in different behavior, so be careful with that. For compatibility, you can re-enable the “legacy” model with the spark.memory.useLegacyMode parameter.
I described the “legacy” memory management model almost a year ago in my article on Spark Architecture. I have also written an article on Spark Shuffle implementations that briefly touches on memory management as well.
This article describes the new memory management model used in Apache Spark starting with version 1.6.0, which is implemented by the UnifiedMemoryManager class.
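As a quick illustration (not from the original post), here is a minimal sketch of how these settings could be passed through SparkConf. The values are purely illustrative, not recommendations: spark.memory.useLegacyMode switches back to the pre-1.6 model, while spark.memory.fraction and spark.memory.storageFraction are the knobs of the new unified model.

```scala
import org.apache.spark.SparkConf

// A minimal sketch; values are examples, not tuning advice.
val conf = new SparkConf()
  .setAppName("memory-config-example")
  // Opt back into the pre-1.6 StaticMemoryManager behavior:
  .set("spark.memory.useLegacyMode", "true")
  // Knobs of the new unified model (only used when legacy mode is off):
  .set("spark.memory.fraction", "0.75")        // share of the heap used for execution + storage
  .set("spark.memory.storageFraction", "0.5")  // part of that share protected for storage
```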
MPP vs Hadoop Talk
Today I gave a talk at the Hadoop User Group Ireland meetup in Dublin; it was an adapted and refactored version of my article on the same subject, MPP vs Hadoop. Here are the slides:
Feel free to comment and share your opinion on this subject.
Apache HAWQ Architecture Talk
I have finally translated my talk from the Highload++ 2015 conference in Moscow into English, so now you can enjoy fresh information about Apache HAWQ internals!
If you’d like to download the slides, you can find them here: HAWQ Architecture HL++ 2015 Moscow
Spark Architecture Talk
Here are the slides for the talk I just gave at JavaDay Kiev about the architecture of Apache Spark and its internals, such as memory management and the shuffle implementation:
If you’d like to download the slides, you can find them here: Spark Architecture – JD Kiev v04
Spark Architecture: Shuffle
This is my second article about Apache Spark architecture, and today I will be more specific and tell you about the shuffle, one of the most interesting topics in the overall Spark design. The previous part was mostly about general Spark architecture and its memory management; it can be accessed here. The next one is about Spark memory management and is available here.
What is the shuffle in general? Imagine that you have a list of phone call detail records in a table and you want to calculate the number of calls that happened each day. You would set the “day” as your key, and for each record (i.e. for each call) you would emit “1” as a value. After this you would sum up the values for each key, which would be the answer to your question: the total number of records for each day. But when the data is stored across the cluster, how can you sum up the values for the same key stored on different machines? The only way to do so is to bring all the values for the same key to the same machine, after which you can sum them up.
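To make this concrete, here is a minimal Spark (Scala) sketch of that exact aggregation; the input path and the CSV layout are hypothetical, and reduceByKey is what triggers the shuffle that co-locates values with the same key.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CallsPerDay {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("calls-per-day"))

    // Hypothetical call detail records, one per line: "2015-11-01,555-0100,555-0199,42"
    val cdrs = sc.textFile("hdfs:///data/cdrs.csv")

    val callsPerDay = cdrs
      .map(line => (line.split(",")(0), 1)) // key = day, value = 1 per call
      .reduceByKey(_ + _)                   // shuffle: group values by key, then sum them

    callsPerDay.collect().foreach { case (day, count) => println(s"$day: $count") }
    sc.stop()
  }
}
```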