use scoped_threadpool::Pool;
use std::sync::Mutex;

const N: u32 = 3;

fn main() {
    let mut pool = Pool::new(N);
    let data_mutex = Mutex::new(vec![0, 1, 2, 3, 4]);
    let res_mutex = Mutex::new(0);

    pool.scoped(|scoped| {
        for _ in 0..N {
            scoped.execute(|| {
                // Hold the lock while we both read and extend the vector.
                let mut data = data_mutex.lock().unwrap();
                let result = data.iter().fold(0, |acc, x| acc + x * 2);
                data.push(result);
                *res_mutex.lock().unwrap() += result;
            });
        }
    });

    println!("{:?}", res_mutex);
}
I kept the variable names similar to hopefully make it easy to see the transformation. Here, instead of using thread::spawn, we use a scoped threadpool. Rather than saying "take this closure and run it on a thread" the way thread::spawn does, we call scoped.execute, which is like spawn, but ties the lifetime of the thread to the scoped variable. This lets the compiler see that all of these threads will be joined before the scope ends, and so it's able to grok the lifetimes. That variable is zero-sized, and so will compile away to nothing.

You still need the mutexes here because multiple threads are writing to the same variables at the same time. But imagine if we didn't push the result onto the vector: we could drop the mutex around it entirely, which would simplify things even further. And when threads access disjoint parts of the data, we can use no mutexes at all; see the example in the crate's documentation: https://crates.io/crates/scoped_threadpool