hadoop - How to allocate physical resources for a big data cluster?


I have 3 servers and want to deploy a Spark standalone cluster or Spark on YARN on them. I have questions about how to allocate physical resources for a big data cluster. For example, I want to know whether I can deploy the Spark master process and a Spark worker process on the same node, and why or why not?

Server details (per machine):

CPU cores: 24
Memory: 128 GB

I need help. Thanks.

Of course you can, just put the master's host in the slaves file as well. On my test servers I have exactly that configuration: the master machine is also a worker node, and there is one worker-only node. It works fine.
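A minimal sketch of that layout for a standalone cluster, assuming hypothetical hostnames node1, node2, node3, with node1 as the master (in newer Spark releases the file is conf/workers instead of conf/slaves):

    # conf/slaves on node1 (hostnames are hypothetical)
    node1    # the master host is listed too, so it also runs a worker
    node2
    node3

    # conf/spark-env.sh on each node, leaving headroom for the master and OS
    SPARK_WORKER_CORES=20
    SPARK_WORKER_MEMORY=100g

Running sbin/start-all.sh on node1 then starts the master there plus one worker per listed host, including node1 itself.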

However, be aware that if a worker fails and causes a major problem (i.e. a system restart), you will have a bigger problem than usual, because the master is affected as well.

Edit: more info after the question edit :) If you are using YARN (as suggested), you can use dynamic resource allocation. There are slides and an article from MapR on it. How to configure memory for a given case is a long topic, but I think those resources will give you the knowledge you need.
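As a rough sketch, dynamic resource allocation on YARN is usually switched on with the following spark-defaults.conf properties; the executor bounds here are illustrative values, not recommendations:

    # conf/spark-defaults.conf (values are illustrative)
    spark.dynamicAllocation.enabled        true
    spark.shuffle.service.enabled          true
    spark.dynamicAllocation.minExecutors   1
    spark.dynamicAllocation.maxExecutors   10

The external shuffle service also has to be registered as an auxiliary service in yarn-site.xml on the NodeManagers, otherwise executors cannot be released safely while their shuffle data is still needed.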

BTW, if you already have a Hadoop cluster installed, maybe try YARN mode ;) but that's outside the topic of the question.
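For completeness, here is a rough example of sizing executors for the 24-core / 128 GB servers when submitting in YARN mode. The numbers are only an illustration (about 4 executors per node, leaving cores and memory for the OS and Hadoop daemons), and my_job.py is a hypothetical application:

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 12 \
      --executor-cores 5 \
      --executor-memory 24g \
      my_job.py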

