Spark Memory Management
Published: 2019-06-23


This article applies to Spark 1.6.0 and later.

Spark 1.6.0 introduced management of off-heap memory and an improved memory management model.

Physically, memory is divided into on-heap and off-heap memory; logically, it is divided into execution memory and storage memory.

Execution memory serves the memory needs of operators during task execution, for example buffering the intermediate results produced on the map side of a shuffle.
Storage memory is mainly used to hold persisted RDD data and broadcast variables.
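To make the distinction concrete, here is a minimal Scala sketch (a SparkContext named sc is assumed, and the input path and variable names are made up for illustration): persist() puts blocks into storage memory, while the shuffle triggered by reduceByKey draws its buffers from execution memory.

    import org.apache.spark.storage.StorageLevel

    // storage memory: cache the RDD's blocks in memory
    val pairs  = sc.textFile("hdfs:///path/to/logs")
                   .map(line => (line.split(" ")(0), 1))
    val cached = pairs.persist(StorageLevel.MEMORY_ONLY)

    // execution memory: the map-side aggregation and sort buffers of the
    // shuffle behind reduceByKey are allocated from execution memory
    val counts = cached.reduceByKey(_ + _)
    counts.count()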

Off-heap memory

The following snippet (from Spark 2.1) shows how off-heap memory is divided between execution memory and storage memory.

protected[this] val maxOffHeapMemory = conf.getSizeAsBytes("spark.memory.offHeap.size", 0)
protected[this] val offHeapStorageMemory =
  (maxOffHeapMemory * conf.getDouble("spark.memory.storageFraction", 0.5)).toLong

offHeapExecutionMemoryPool.incrementPoolSize(maxOffHeapMemory - offHeapStorageMemory)
offHeapStorageMemoryPool.incrementPoolSize(offHeapStorageMemory)

[Figure: off-heap memory allocation]
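Note that off-heap memory is disabled by default: spark.memory.offHeap.size defaults to 0 and is only honored when spark.memory.offHeap.enabled is set. A minimal configuration sketch (the 2g figure is an arbitrary example) that would give the pools above something to split:

    import org.apache.spark.SparkConf

    // enable 2 GB of off-heap memory; with the default
    // spark.memory.storageFraction = 0.5, half becomes the initial
    // off-heap storage pool and half the off-heap execution pool
    val conf = new SparkConf()
      .set("spark.memory.offHeap.enabled", "true")
      .set("spark.memory.offHeap.size", "2g")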

On-heap memory

The division of on-heap memory is shown in the figure below.

[Figure: on-heap memory allocation]

  • Total memory

    In Spark 2.1 it is obtained with the following code:

    val systemMemory = conf.getLong("spark.testing.memory", Runtime.getRuntime.maxMemory)

  • Reserved system memory

    The reserved memory is fixed by the constant RESERVED_SYSTEM_MEMORY_BYTES, which is set to 300 MB.

    The total memory must be at least 1.5 times the reserved memory:

    val minSystemMemory = (reservedMemory * 1.5).ceil.toLong

    and the following check enforces this:

    if (systemMemory < minSystemMemory) {
      throw new IllegalArgumentException(s"System memory $systemMemory must " +
        s"be at least $minSystemMemory. Please increase heap size using the --driver-memory " +
        s"option or spark.driver.memory in Spark configuration.")
    }
    // SPARK-12759 Check executor memory to fail fast if memory is insufficient
    if (conf.contains("spark.executor.memory")) {
      val executorMemory = conf.getSizeAsBytes("spark.executor.memory")
      if (executorMemory < minSystemMemory) {
        throw new IllegalArgumentException(s"Executor memory $executorMemory must be at least " +
          s"$minSystemMemory. Please increase executor memory using the " +
          s"--executor-memory option or spark.executor.memory in Spark configuration.")
      }
    }

  • Spark usable memory

    Total Spark memory = (system memory - reserved memory) * spark.memory.fraction (a worked numeric example follows this list)

    val usableMemory   = systemMemory - reservedMemory
    val memoryFraction = conf.getDouble("spark.memory.fraction", 0.6)
    (usableMemory * memoryFraction).toLong

  • Storage memory

    Storage memory = Spark usable memory * spark.memory.storageFraction

    onHeapStorageRegionSize = (maxMemory * conf.getDouble("spark.memory.storageFraction", 0.5)).toLong

  • Execution memory

    Execution memory = Spark usable memory - Storage memory

    private[spark] class UnifiedMemoryManager private[memory] (
        conf: SparkConf,
        val maxHeapMemory: Long,
        onHeapStorageRegionSize: Long,
        numCores: Int)
      extends MemoryManager(
        conf,
        numCores,
        onHeapStorageRegionSize,
        maxHeapMemory - onHeapStorageRegionSize)
  • Dynamic adjustment between storage and execution memory

    Storage can borrow as much execution memory as is free until execution reclaims its space. When this happens, cached blocks will be evicted from memory until sufficient borrowed memory is released to satisfy the execution memory request.

    Similarly, execution can borrow as much storage memory as is free. However, execution memory is never evicted by storage due to the complexities involved in implementing this. The implication is that attempts to cache blocks may fail if execution has already eaten up most of the storage space, in which case the new blocks will be evicted immediately according to their respective storage levels.

The passage above is the official Spark comment on this dynamic adjustment; it boils down to the following points:

- When execution memory is free, storage can borrow it; when execution needs that memory back, storage releases the borrowed space. This is safe because storage data that no longer fits in memory can spill to local disk.

- When storage memory is free, execution can borrow it as well, but execution does not give the borrowed memory back until it is done with it, because execution memory holds the intermediate results of in-flight computation and evicting it would cause the subsequent computation to fail.
  • User memory

    This part of memory is entirely at the user's disposal, for example for user-defined data structures.
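Putting the formulas above together, here is a worked numeric sketch. The 4096 MB heap is an arbitrary assumption, and on a real JVM systemMemory (Runtime.getRuntime.maxMemory) typically comes out somewhat smaller than the configured -Xmx, so the numbers are only illustrative.

    // assumed heap size: 4096 MB
    val systemMemory    = 4096L * 1024 * 1024
    val reservedMemory  = 300L * 1024 * 1024                   // RESERVED_SYSTEM_MEMORY_BYTES
    val minSystemMemory = (reservedMemory * 1.5).ceil.toLong   // 450 MB sanity-check threshold

    val usableMemory    = systemMemory - reservedMemory        // 3796 MB
    val sparkMemory     = (usableMemory * 0.6).toLong          // spark.memory.fraction = 0.6        -> ~2277 MB
    val storageRegion   = (sparkMemory * 0.5).toLong           // spark.memory.storageFraction = 0.5 -> ~1139 MB
    val executionRegion = sparkMemory - storageRegion          //                                     -> ~1139 MB
    val userMemory      = usableMemory - sparkMemory           // left for user data structures      -> ~1518 MB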



Reposted from: https://www.cnblogs.com/woople/p/6839367.html
