paramiko.rsakey.RSAKey object ERROR on 200 node spot instance cluster creation #78
Some questions:
I am running OS X. Here's my config:

provider: ec2
providers:
launch:
  install-hdfs: True
  install-spark: False
We have pretty much the same setup, minus the instance type, so I just launched a 2-node cluster with the same settings. The error message you're seeing suggests something about SSH is borked. Are you able to SSH into a new EC2 instance you launched, completely outside of Flintrock?
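(Not part of the original exchange, but here is a minimal sketch of how that SSH sanity check could be scripted with paramiko, the library named in the error. The hostname, key path, and username below are placeholders, not values from this issue.)

```python
# Minimal SSH sanity check with paramiko, outside of Flintrock.
# HOST, KEY_PATH, and USERNAME are hypothetical placeholders.
import paramiko

HOST = "ec2-203-0-113-10.compute-1.amazonaws.com"  # hypothetical instance
KEY_PATH = "/path/to/my-key.pem"                   # hypothetical .pem key
USERNAME = "ec2-user"                              # depends on the AMI

# Load the private key; a bad or passphrase-protected key usually fails
# here, before any network traffic happens.
key = paramiko.RSAKey.from_private_key_file(KEY_PATH)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USERNAME, pkey=key, timeout=10)

stdin, stdout, stderr = client.exec_command("echo SSH works")
print(stdout.read().decode().strip())
client.close()
```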
I cannot reproduce that exact error. On my Mac, I tried raising the open file limit with sudo launchctl limit maxfiles 1000000 1000000. I am going to try it again while I am in the venv shell.
Yeah, if you installed Flintrock into a virtual environment, you always need to run it from within that virtual environment. The "Too many open files" error smells like something specific to your system. Let me know if you can pare these problems down to something small and specific, and I'll try again to reproduce it on my side.
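(As an aside, not from the thread: a quick way to confirm you are actually running inside the virtual environment where Flintrock was installed is a standard-library check like this.)

```python
# Quick check: is this interpreter running inside a virtual environment?
# sys.prefix differs from sys.base_prefix inside a venv (Python 3.3+);
# older virtualenv installs set sys.real_prefix instead.
import sys

in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix) or hasattr(sys, "real_prefix")
print("Inside a virtual environment:", in_venv)
print("Interpreter:", sys.executable)
```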
Okay, I got it working. I am running Yosemite, and it turns out the way file descriptor limits are changed is different in that release. The following page helped me: http://blog.mact.me/2014/10/22/yosemite-upgrade-changes-open-file-limit I wonder if the original issue was a symptom of this problem.
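(Also not from the thread, but for anyone debugging the same thing: you can see the open-file limit a process actually gets, independent of how launchctl was configured, with Python's resource module. The 4096 target below is just an illustrative number.)

```python
# Inspect and (optionally) raise the per-process open file limit.
# Works on macOS and Linux; the target of 4096 is an arbitrary example.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"Current open-file limits: soft={soft}, hard={hard}")

target = 4096
if hard != resource.RLIM_INFINITY:
    target = min(target, hard)  # the soft limit cannot exceed the hard limit

if soft < target:
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    print(f"Raised soft limit to {target}")
```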
Not sure if one issue was ultimately caused by the other, but good to see you cleared things up. If you run into your original issue again, feel free to reopen with more detail to help me reproduce it on my side.
I checked out the latest code from last night (1/28/2016) and rebuilt everything.
I did the following:
Responding Y to the terminate cluster prompt did end up terminating the cluster.