From: https://stackoverflow.com/questions/32245498/sparkexception-master-removed-our-application
Answer:
As mentioned in the attempts, the root cause is a timeout between the master node and one or more workers.
Another thing to try: verify that all workers are reachable by hostname from the master, either via DNS or an entry in the /etc/hosts file.
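For example, a quick way to check this from the master is to try resolving each worker's hostname. This is just a sketch added here, not part of the original answer; the worker names are hypothetical placeholders.

    import socket

    # Hypothetical worker hostnames -- replace with the ones your Spark master reports.
    workers = ["worker-1.internal", "worker-2.internal", "worker-3.internal"]

    for host in workers:
        try:
            # Resolution uses /etc/hosts and/or DNS, the same lookup the master relies on.
            ip = socket.gethostbyname(host)
            print(f"{host} -> {ip}")
        except socket.gaierror as err:
            print(f"{host} FAILED to resolve: {err}")

Any hostname that fails to resolve from the master is a candidate for the timeout described above.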
In my case, the problem was that the cluster was running in an AWS subnet without DNS. The cluster grew over time by spinning up a node and then adding it to the cluster. When the master was built, only a subset of the cluster's addresses was known, and only that subset was added to the /etc/hosts file. When dse spark was run from a "new" node, communication from the master using the worker's hostname failed, and the master killed the job.
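In that situation the fix is to make sure the master's /etc/hosts lists every worker, along these lines (the addresses and hostnames below are made up for illustration):

    # /etc/hosts on the master -- hypothetical entries, one per worker
    10.0.1.11   worker-1.internal
    10.0.1.12   worker-2.internal
    10.0.1.13   worker-3.internal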
My own solution was to restart ZooKeeper and Kafka.