01 - Spark SQL job fails with an exception because the cluster disk space is full


Exception log:

Caused by: java.sql.SQLException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 799 in stage 9537.0 failed 4 times, most recent failure: Lost task 799.3 in stage 9537.0 (TID 14699170, node37.tj.leap.com): java.io.IOException: No space left on device

Solution:
1. Delete the usercache directories that YARN generates on the NodeManager local disks. This frees space immediately, but it is only a temporary workaround (see the sketch below).
2. Deleting the cache does not fix the root cause: the data volume is simply too large for the available disk space, so the same failure will recur. The lasting fix is to expand the cluster with more nodes and then run a balance so the data is redistributed across the new capacity.
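For step 1, here is a minimal cleanup sketch in Python, assuming the NodeManager local dir is /data/hadoop/yarn/local (a placeholder; check yarn.nodemanager.local-dirs in yarn-site.xml for your cluster's actual value). Stop the NodeManager on the node before running it, otherwise you may delete cache files belonging to running containers:

import shutil
from pathlib import Path

# Placeholder path: replace with the value of yarn.nodemanager.local-dirs
# from yarn-site.xml on your cluster (there may be several, comma-separated).
LOCAL_DIRS = ["/data/hadoop/yarn/local"]

def clear_usercache(local_dirs):
    """Delete the per-user application caches under each NodeManager
    local dir. Only run this on a node whose NodeManager is stopped."""
    for d in local_dirs:
        usercache = Path(d) / "usercache"
        if not usercache.is_dir():
            continue
        for user_dir in usercache.iterdir():
            # Each subdirectory holds one user's cached container files.
            shutil.rmtree(user_dir, ignore_errors=True)

if __name__ == "__main__":
    clear_usercache(LOCAL_DIRS)

For step 2, after adding DataNodes the usual way to redistribute HDFS blocks is the built-in balancer, e.g. hdfs balancer -threshold 10, which moves blocks until every DataNode's utilization is within 10 percentage points of the cluster average.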
Please credit the original when reposting: https://www.6miu.com/read-1450117.html
