Some Multiprocessing Added #2
base: main
Conversation
Thank you for the PR! I think it would be great to have a
args_tuple_list.append([config_path, crashing_input, extra_args, True, True, results_queue])
iterator += 1
p = mp.Process(target=multi_run_target_manager, args=(args_tuple_list,))
I think mp.Process does not limit the number of concurrent processes. We want to be able to limit this number so that other fuzzing runs are not impacted by the trace generation taking up all of the available computation. multiprocessing.Pool should be able to limit this number. Given that, we can add a -n argument, similar to the fuzzware pipeline's -n <num_cores> argument, which allows setting the maximum number of cores/parallel processes used by trace generation.
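For illustration, a minimal sketch of that suggestion, assuming an argparse-based entry point; the worker name run_one_target and the dummy job tuples are placeholders, not the actual trace generation API from this PR:

```python
import argparse
import multiprocessing as mp


def run_one_target(job):
    # Placeholder worker: unpack one per-input argument tuple and run
    # trace generation for it (the real call would go here).
    config_path, crashing_input, extra_args = job
    print(f"generating traces for {crashing_input} using {config_path}")


def main():
    parser = argparse.ArgumentParser()
    # Mirrors the fuzzware pipeline's -n <num_cores> option.
    parser.add_argument("-n", "--num-cores", type=int, default=1,
                        help="maximum number of parallel trace generation processes")
    opts = parser.parse_args()

    # Built the same way as args_tuple_list in the loop above (dummy values here).
    jobs = [("config.yml", f"crash_{i}", []) for i in range(8)]

    # Pool caps the number of concurrently running worker processes at
    # --num-cores, unlike spawning an unbounded set of mp.Process objects.
    with mp.Pool(processes=opts.num_cores) as pool:
        pool.map(run_one_target, jobs)


if __name__ == "__main__":
    main()
```

With a Pool, at most -n worker processes exist at any time, so trace generation cannot starve other fuzzing runs of cores.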
job_args_multi.append((str(config_path), str(input_path), bbl_trace_path, ram_trace_path, mmio_trace_path,
                       bbl_set_path, mmio_set_path, extra_args, not verbose, bbl_hash_path, processed_queue))
iteration += 1
p = mp.Process(target=multi_proc_manager, args=(gen_traces, job_args_multi))
Same here. We want to be able to limit the number of cores/parallel processes used.
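A hedged sketch of the same change for this loop, using pool.starmap so each tuple in job_args_multi is unpacked into positional arguments. gen_traces_stub and its reduced signature are placeholders for the real gen_traces; one practical caveat is that a plain mp.Queue cannot be passed to Pool workers as an argument, so processed_queue would have to come from a multiprocessing.Manager:

```python
import multiprocessing as mp


def gen_traces_stub(config_path, input_path, extra_args, silent, processed_queue):
    # Stand-in for gen_traces with a reduced signature; the real function
    # takes the full tuple built in job_args_multi above.
    processed_queue.put(input_path)


def main():
    num_cores = 2  # would come from the proposed -n argument

    # A Manager-backed queue is used because plain mp.Queue objects cannot
    # be handed to Pool workers as arguments.
    manager = mp.Manager()
    processed_queue = manager.Queue()

    # Dummy job tuples standing in for job_args_multi from the loop above.
    job_args_multi = [("config.yml", f"input_{i}", [], True, processed_queue)
                      for i in range(8)]

    # starmap unpacks each tuple into the worker's positional arguments
    # while the Pool limits concurrency to num_cores.
    with mp.Pool(processes=num_cores) as pool:
        pool.starmap(gen_traces_stub, job_args_multi)

    while not processed_queue.empty():
        print("processed:", processed_queue.get())


if __name__ == "__main__":
    main()
```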