Beta-3. Added new method of setup and grabbing files.
Added a new way to set up and have the whole system work. To better
protect my sanity, Dr. Hampton suggested going the route of grabbing files
using wget from my own www directory, so I don't need to hardcode my
password into anything. Instead of wget, though, I decided to go with
curl, as it can easily overwrite files in the process of downloading them,
which is what we want after all. Also changed the way the system dies
when coming across a bad pending job status (learn something new about
UGE every day). Now it will write to a log the status that caused the
death, instead of mysteriously dying with exit code 20.
Updated the README to reflect the new additions. For cosmetic appeal, I
added a favicon.ico to be displayed on all of the pages. This is the
same one found on crc.nd.edu, so if you're an outsider using this repo
I'd suggest changing it to your organization's.
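The new death-logging behavior can be sketched like this (an illustrative helper, not the exact code added to queue_mapd.py; the log name and exit code 20 come from the repo):

```python
import datetime
import sys

LOG_NAME = "queue_mapd.log"  # log file name used by the daemon

def die_with_log(status, code=20):
    """Sketch of the new behavior: record the offending pending-job
    status in a log before exiting, instead of a bare sys.exit(20)."""
    stamp = datetime.datetime.now().isoformat()
    with open(LOG_NAME, "a") as log:
        log.write("died with exit code {0}: unrecognized pending job "
                  "status {1!r} at {2}\n".format(code, status, stamp))
    sys.exit(code)
```

With this in place, a crash leaves a one-line explanation behind rather than an opaque exit status in cron's mail.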
CodyKank committed Aug 17, 2016
1 parent 1fb7b2b commit ca4326e
Showing 10 changed files with 188 additions and 19 deletions.
50 changes: 36 additions & 14 deletions README
Original file line number Diff line number Diff line change
@@ -32,33 +32,55 @@
you.
First you should place queue_mapd.py on the front end you plan on using.
Once it's there, run it with ./queue_mapd.py --setup.
This will create the setup files the setup script is looking for, which
contain the nodes the queues currently have, in plain text. The setup
script uses those files to create the dirs for each of the nodes.
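The setup files are plain text with one node name per line, for example (hostnames here are illustrative, not actual CRC nodes):

```
d6copt101.crc.nd.edu
d6copt102.crc.nd.edu
d6copt103.crc.nd.edu
```

The setup script reads each line and creates a matching directory for that node.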

As of beta-3, there are now two ways you can run this. Either:
(1) Hard-code your password into a script which gathers the necessary files
from the front end using sshpass and scp. The setup script for this
method also uses sshpass and scp. Or (2) have the daemon write the necessary
files to a location which is accessible from the web by way of curl; i.e.
/foo/bar/www/index-long.html etc. The setup script for this method also
relies on curl and some form of web access.
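A quick way to see the overwrite behavior that motivated the switch to curl (a standalone sketch; the file:// URL stands in for your real www URL):

```shell
#!/bin/sh
# `curl -o FILE` overwrites FILE on every run, so repeated fetches keep a
# single up-to-date copy, whereas a plain `wget URL` would save a duplicate
# as index-long.html.1. A local file:// URL is used here so the sketch runs
# without a web server; swap in your own www URL in practice.
src=$(mktemp)
echo "queue map page" > "$src"
curl -sf -o index-long.html "file://$src"   # first fetch creates the file
curl -sf -o index-long.html "file://$src"   # second fetch overwrites in place
ls index-long.html*                         # one file, no .1 copy
rm -f "$src" index-long.html
```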

Method (1) :
------------------------------------------------------------------------
Next you should view the setup.sh script in your favorite editor (vim)
and enter your information where it is specified. If you can think of
a better way to do it and you understand how it works, go ahead and go
your own way. Enter both your CRC and local info. To be safe, make sure
this script is only readable by yourself.

Once setup.sh is configured, run it. This will create all the dirs to
be used later on.

If the script completed nicely, configure the grab_queue_files.sh
script the same way as before, entering your info where it is specified
in the script itself.

Once that is configured, you can run the script manually to see
if everything is working. Just go to localhost in a browser to test it.

Method (2):
------------------------------------------------------------------------
View the curl_setup.sh script in your favorite text editor and enter
any information which is required, such as file paths. If you do not
want to code in your server password, you can change the sudoers file to
allow whichever user you cron this job as to run all of the commands
without needing a sudo password.
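For example, a sudoers drop-in like the following would do it (the user name www-data, the file name, and the command paths are assumptions for your system; always edit with visudo -f rather than directly):

```
# /etc/sudoers.d/queuemap -- hypothetical example
www-data ALL=(root) NOPASSWD: /bin/mv, /bin/cp, /bin/mkdir
```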

Now that curl_setup.sh is configured, you should do the same with the
curl_queue_files.sh file. This is the script which will indefinitely grab
the html files for use in your webpages.

Both Methods:
-----------------------------------------------------------------------
If everything appears to be working, you should next make the non-setup
script you configured earlier into a cron job. Do this by typing 'crontab -e'
and going to the last line of the file, under all of the comments. Add the line
'*/2 * * * * PATH-TO-DIR/Queue_map/(preferred)_queue_files.sh'
This will run the grabbing script every two minutes, which will automatically
keep your info up to date.

Besides Apache and PHP, if you chose Method (1) you must have 'sshpass'
installed on your webserver in order for this to work correctly.
Debian/Ubuntu users can: 'sudo apt-get install sshpass'. RPM users, I
haven't tried it yet.

* This has been tested on the latest versions of Firefox, Vivaldi, and Opera. It's unknown to me
@@ -81,4 +103,4 @@
Problems with a Queue or specific node: seek crc support -
crcsupport@listserv.nd.edu

Version: 0.8-Beta-2.1
Version: 0.8-Beta-3
44 changes: 44 additions & 0 deletions curl_queue_files.sh
@@ -0,0 +1,44 @@
#!/bin/bash

#Bash script to gather the html files for a Queuemap webpage. It will grab the files from a web-hosted service,
#like the www directory in my Public afs space.

#local info:
pswd="LOCAL(SERVER) PASSWORD HERE!!!" #or configure sudoers file

desired_path="DESIRED PATH FOR SERVER HERE!" #typically /var/www/html (you don't need the last '/')

curl_url="URL FOR CURL GOES HERE" #Don't need last '/'
#Gathering files from the CRC front end using curl

curl -o index-long.html "$curl_url/index-long.html"

curl -o index-debug.html "$curl_url/index-debug.html"

curl -o pending_content.html "$curl_url/pending.html"

curl -o sub-debug.tar.gz "$curl_url/sub-debug.tar.gz"

curl -o sub-long.tar.gz "$curl_url/sub-long.tar.gz"

#Moving the files once on the web-server to proper locations

echo "$pswd" | sudo -S mv index-long.html "$desired_path/Long/index-long.html"

echo "$pswd" | sudo -S mv index-debug.html "$desired_path/Debug/index-debug.html"

echo "$pswd" | sudo -S mv pending_content.html "$desired_path/Pending/pending_content.html"

#Setting up node files:

tar -xzf sub-debug.tar.gz
for i in debug@*; do
    echo "$pswd" | sudo -S mv "$i" "$desired_path/Debug/$i/sub-index.html"
done

tar -xzf sub-long.tar.gz
for j in d6copt*; do
    echo "$pswd" | sudo -S mv "$j" "$desired_path/Long/$j/sub-index.html"
done
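One hardening idea for the script above (a sketch under stated assumptions, not part of the repo): only replace a live file when curl reports success, so an unreachable front end can't blank a working page. It reuses the `$curl_url` variable that curl_queue_files.sh configures, and drops the repo's `sudo -S` wrapper around mv for clarity:

```shell
#!/bin/bash
# fetch_and_install: download to a temp path, install only on curl success.
# $curl_url is assumed to be set as in curl_queue_files.sh.
fetch_and_install () {
    local name="$1" dest="$2"
    if curl -sSf -o "/tmp/qm.$name" "$curl_url/$name"; then
        mv "/tmp/qm.$name" "$dest"        # repo wraps this in sudo -S
    else
        echo "curl failed for $name; keeping previous copy of $dest" >&2
        return 1
    fi
}
# Example use: fetch_and_install index-long.html "$desired_path/Long/index-long.html"
```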
78 changes: 78 additions & 0 deletions curl_setup.sh
@@ -0,0 +1,78 @@
#!/bin/bash

#Script to set up the CRC Queue heat-map. That's what it's called, I guess.
#8/3/16
#Requires sudo permission. This script assumes you have apache2 running and
#php7.0(or something compatible). Default path is listed below.

#This needs to be an absolute path
desired_path="/var/www/html" #you can change this if need be

#CRC info to gather files
webpage_url="URL FOR CRC FILES HERE"

#Local info to mv files to protected areas
psword="LOCAL (WEBSERVER) SUDO PASSWORD"
long_file="long_nodes.txt" #These can stay this way
debug_file="debug_nodes.txt" # "

#Creating dirs moving files to proper locations
echo "Creating Debug, Long, and Pending directories in $desired_path . . ."

echo "$psword" | sudo -S mkdir -p "$desired_path/Debug"
echo "$psword" | sudo -S mkdir -p "$desired_path/Long"
echo "$psword" | sudo -S mkdir -p "$desired_path/Pending"

echo "Moving each index to its rightful place . . ."
echo "$psword" | sudo -S cp index-long.php "$desired_path/Long/index.php"
echo "$psword" | sudo -S cp index-debug.php "$desired_path/Debug/index.php"
echo "$psword" | sudo -S cp index-pending.php "$desired_path/Pending/index.php"

echo "Transferring templates to $desired_path . . ."
echo $psword | sudo -S cp -r templates $desired_path/templates

echo "Transferring styles.css to $desired_path . . ."
echo $psword | sudo -S cp styles.css $desired_path/

echo "Gathering node-list files from $webpage_url . . ."
curl -o debug_nodes.txt $webpage_url/debug_node_list.html
curl -o long_nodes.txt $webpage_url/long_node_list.html

# Creating and initializing each node's dir etc
#Long-queue nodes
echo "Creating each node's dir at $desired_path/Long . . ."
while IFS= read -r line
do
    echo "$psword" | sudo -S mkdir -p "$desired_path/Long/$line"
    echo "$psword" | sudo -S cp sub-index.php "$desired_path/Long/$line/index.php"
done < "$long_file"

echo "Creating each node's dir at $desired_path/Debug . . ."

#Debug-queue nodes
while IFS= read -r line
do
    echo "$psword" | sudo -S mkdir -p "$desired_path/Debug/$line"
    echo "$psword" | sudo -S cp sub-index.php "$desired_path/Debug/$line/index.php"
done < "$debug_file"


echo "-----------------------COMPLETE-----------------------"
echo ""

echo "Setup complete. Please quickly verify everything was made correctly."
echo "You can do this by opening a browser and going to localhost and navigating"
echo "to your Long or Debug directories."
echo ""
echo "Please be sure that the python script is running on a front end, and that"
echo "all scripts are configured to the location the script is going to be spitting"
echo "out at."
echo ""
echo "Once you know things are where they should be, make sure you configured the"
echo "grabbing script (grab_queue_files.sh or curl_queue_files.sh) for your info."
echo "If it is configured already, add one of the two lines below to crontab -e:"
echo "If you chose method(1) as described in README, add this to cron:"
echo "*/2 * * * * $(pwd)/grab_queue_files.sh"
echo ""
echo "If you chose method(2), add this to cron:"
echo "*/2 * * * * $(pwd)/curl_queue_files.sh"
26 changes: 22 additions & 4 deletions queue_mapd.py
@@ -5,7 +5,7 @@
from that information for a 'heat' map of the queue. This partial page is a component to
be included from index.php on the current web-server. There are two other components,
header.html and footer.html for each: Debug and Long. Latest update:
Aug 12th, 2016 v0.8.beta-2.1
Aug 17th, 2016 v0.8.1-beta-3
Exit codes: 0 - Good
20 - Bad Pending Job status"""

@@ -197,7 +197,8 @@ def set_status(self, status):
elif status == 'Eqw':
status = 'Error'
else:
sys.exit(20)
#sys.exit(20)
write_log(status, 20)
self.status = status
return

@@ -216,6 +217,8 @@ def get_date(self):
#^--------------------------------------------------------- class Pending(Job)

#If you change the names here, don't forget to change them in cron job script and php files on webserver!
#If you are using a curl method of obtaining files, then make sure you change path to your www dir etc!
#(as in afs/crc.nd.edu/user/j/jdoe/www/index-long.html)
LONG_SAVE_FILE = 'index-long.html'
DEBUG_SAVE_FILE = 'index-debug.html'
PENDING_SAVE_FILE = 'pending.html'
@@ -512,9 +515,9 @@ def tar_node_files(node_list, Queue):
why this script should be running in its own directory."""

if Queue == 'Long':
save_name = 'sub-long.tar.gz'
save_name = '/afs/crc.nd.edu/user/c/ckankel/www/sub-long.tar.gz'
else:
save_name = 'sub-debug.tar.gz'
save_name = '/afs/crc.nd.edu/user/c/ckankel/www/sub-debug.tar.gz'

tar = tarfile.open(save_name, 'w:gz')
for node in node_list:
@@ -565,6 +568,21 @@ def write_setup_files(node_list, queue_name):
return
#^--------------------------------------------------------- write_setup_files(node_list)

def write_log(info, code):
    """Write what went wrong to a log, then exit with the given code."""
    log_name = 'queue_mapd.log'
    date = subprocess.getoutput('date')

    if int(code) == 20:
        content = 'I am {0}, and have died because of a bad pending job status, with {1} as the attempted status on {2}\n'.format(sys.argv[0], info, date)
    else:
        content = 'I am {0}, but I do not know how I got to the point of writing a log...\n'.format(sys.argv[0])

    with open(log_name, 'a') as file:
        file.write(content)
    sys.exit(code)
#^--------------------------------------------------------- write_log(info, code)

def show_usage():
"""Method to display how to use this script on stdout"""

4 changes: 4 additions & 0 deletions setup.sh
@@ -75,4 +75,8 @@ echo ""
echo "Once you know things are where they should be, make sure you configured the"
echo "grab_queue_files.sh script for your info to grab the files."
echo "If it is configured already, add this line to crontab -e:"
echo "If you chose method(1) as described in README, add this to cron:"
echo "*/2 * * * * $(pwd)/grab_queue_files.sh"
echo ""
echo "If you chose method(2), add this to cron:"
echo "*/2 * * * * $(pwd)/curl_queue_files.sh"
Binary file added templates/favicon.ico
Binary file not shown.
2 changes: 1 addition & 1 deletion templates/footer.html
@@ -1,6 +1,6 @@
</div>
<div id="footer">
<p>v0.8-beta-2</p>
<p>v0.8-beta-3</p>
</div>
</body>
</html>
1 change: 1 addition & 0 deletions templates/header.html
@@ -4,6 +4,7 @@
<title>CRC Queue Status</title>
<link rel="stylesheet" href="../styles.css">
<META HTTP-EQUIV="refresh" CONTENT="60">
<link rel="shortcut icon" type="image/x-icon" href="../templates/favicon.ico" />
</head>
<body>
<div id="header">
1 change: 1 addition & 0 deletions templates/pending-header.html
@@ -4,6 +4,7 @@
<title>CRC Pending Jobs</title>
<link rel="stylesheet" href="../styles.css">
<META HTTP-EQUIV="refresh" CONTENT="60">
<link rel="shortcut icon" type="image/x-icon" href="../templates/favicon.ico" />
</head>
<body>
<div id="header">
1 change: 1 addition & 0 deletions templates/sub-header.html
@@ -4,6 +4,7 @@
<title>CRC Node Status</title>
<link rel="stylesheet" href="../../styles.css">
<META HTTP-EQUIV="refresh" CONTENT="60">
<link rel="shortcut icon" type="image/x-icon" href="../../templates/favicon.ico" />
</head>
<body>
<div id="header">
