2d1e4144b3
* fix: prevent crashing when napi_register_module_v1 is called twice

Currently napi-rs addons can lead to the Node.js process aborting with the following error when initialising the addon on Windows:

```
c:\ws\src\cleanup_queue-inl.h:32: Assertion `(insertion_info.second) == (true)' failed.
```

This happens because `napi_add_env_cleanup_hook` must not be called with the same arguments multiple times unless the previously scheduled cleanup hook with the same arguments has already executed. However, the cleanup hook added by `napi_register_module_v1` in napi-rs on Windows was always registered with `ptr::null_mut()` as its argument.

One case where this causes a problem is using the addon from multiple contexts (e.g. Node.js worker threads) at the same time. More generally, Node.js provides no guarantee that the N-API addon initialisation code runs only once, even per thread and context. In fact, it is entirely valid to run `process.dlopen()` multiple times from JavaScript land in Node.js, and this leads to the initialisation code being run multiple times as different `exports` objects may need to be populated. This may happen in numerous cases, e.g.:

- When it's not possible or not desirable to use `require()` and users must resort to `process.dlopen()` (one use case is passing non-default flags to `dlopen(3)`, another is ES modules). Caching the results of `process.dlopen()` to avoid running it more than once may not always be reliably possible (for example, because of the Jest sandbox).
- When the `require` cache is cleared.
- On Windows: `require("./addon.node")` followed by `require(path.toNamespacedPath("./addon.node"))`.

Another issue is fixed inside `napi::tokio_runtime::drop_runtime`: there is no need to call `napi_remove_env_cleanup_hook` there (it is only useful for cancelling hooks that have not executed yet). The null pointer retrieved from `arg` was being passed as the `env` argument of that function, so the call did nothing and simply returned `napi_invalid_arg`.

This patch makes `napi_register_module_v1` use a counter as the cleanup hook argument, so that the value is always different. An alternative might have been a higher-level abstraction around `sys::napi_env_cleanup_hook` that takes ownership of a boxed closure, if something like that already exists in the API. Another alternative could have been to heap-allocate a value so that we would have a unique valid memory address.

The patch also contains a minor code cleanup related to `RT_REFERENCE_COUNT` along the way: the counter is now encapsulated inside its module, `ensure_runtime` takes care of incrementing it, and less strict memory ordering is used as there is no need for `SeqCst` here. If desired, this can be further optimised to `Ordering::Release` plus a separate acquire fence inside the if statement in `drop_runtime`, as `AcqRel` for every decrement is also a bit stricter than necessary (although simpler). These changes are not necessary to fix the issue and can be extracted to a separate patch.

At first it was tempting to use the loaded value of `RT_REFERENCE_COUNT` as the argument for the cleanup hook, but that would have been wrong; a simple counterexample is the following sequence:

1. init in the first context (queue: 0)
2. init in the second context (queue: 0, 1)
3. destroy the first context (queue: 1)
4. init in the third context (queue: 1, 1)

* test(napi): unload test was excluded unexpectedly

---------

Co-authored-by: LongYinan <lynweklm@gmail.com>
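The core of the fix — handing `napi_add_env_cleanup_hook` a value that is distinct on every registration — can be sketched in isolation. This is a self-contained illustration, not the actual napi-rs code; the name `unique_cleanup_arg` is hypothetical, and the real registration call is elided:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Node.js asserts that each (hook function, arg) pair in the cleanup queue is
// unique, so every registration must receive a distinct `arg` value.
static CLEANUP_HOOK_COUNTER: AtomicUsize = AtomicUsize::new(0);

fn unique_cleanup_arg() -> *mut std::ffi::c_void {
  // fetch_add returns the previous value, so every call yields a different
  // number; +1 keeps the result non-null. Two concurrent module
  // initialisations can therefore never collide on the same arg.
  (CLEANUP_HOOK_COUNTER.fetch_add(1, Ordering::Relaxed) + 1) as *mut std::ffi::c_void
}

fn main() {
  let first = unique_cleanup_arg();
  let second = unique_cleanup_arg();
  assert_ne!(first, second);
  println!("args are distinct");
}
```

Note that the loaded counter value is only used as an opaque token; the hook itself ignores its argument, which is exactly why uniqueness is all that matters here.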
114 lines
3.4 KiB
Rust
use std::{future::Future, sync::RwLock};

use once_cell::sync::Lazy;
use tokio::runtime::Runtime;

use crate::{sys, JsDeferred, JsUnknown, NapiValue, Result};
fn create_runtime() -> Option<Runtime> {
  #[cfg(not(target_arch = "wasm32"))]
  {
    let runtime = tokio::runtime::Runtime::new().expect("Create tokio runtime failed");
    Some(runtime)
  }

  #[cfg(target_arch = "wasm32")]
  {
    tokio::runtime::Builder::new_current_thread()
      .enable_all()
      .build()
      .ok()
  }
}
pub(crate) static RT: Lazy<RwLock<Option<Runtime>>> = Lazy::new(|| RwLock::new(create_runtime()));

#[cfg(windows)]
static RT_REFERENCE_COUNT: std::sync::atomic::AtomicUsize = std::sync::atomic::AtomicUsize::new(0);
/// Ensure that the Tokio runtime is initialized.
/// On Windows the Tokio runtime is dropped when the Node env exits,
/// but in the Electron renderer process the Node env exits and is recreated when the window reloads.
/// So we need to ensure that the Tokio runtime is initialized whenever the Node env is created.
#[cfg(windows)]
pub(crate) fn ensure_runtime() {
  use std::sync::atomic::Ordering;

  let mut rt = RT.write().unwrap();
  if rt.is_none() {
    *rt = create_runtime();
  }

  RT_REFERENCE_COUNT.fetch_add(1, Ordering::Relaxed);
}
#[cfg(windows)]
pub(crate) unsafe extern "C" fn drop_runtime(_arg: *mut std::ffi::c_void) {
  use std::sync::atomic::Ordering;

  if RT_REFERENCE_COUNT.fetch_sub(1, Ordering::AcqRel) == 1 {
    RT.write().unwrap().take();
  }
}
/// Spawns a future onto the Tokio runtime.
///
/// Depending on where you use it, you should await or abort the future in your drop function
/// to avoid undefined behavior and memory corruption.
pub fn spawn<F>(fut: F) -> tokio::task::JoinHandle<F::Output>
where
  F: 'static + Send + Future<Output = ()>,
{
  RT.read().unwrap().as_ref().unwrap().spawn(fut)
}
/// Runs a future to completion.
/// This is blocking, meaning that it pauses other execution until the future is complete.
/// Only use it when it is absolutely necessary; elsewhere, use async functions instead.
pub fn block_on<F>(fut: F) -> F::Output
where
  F: 'static + Send + Future<Output = ()>,
{
  RT.read().unwrap().as_ref().unwrap().block_on(fut)
}
// This function's signature must be kept in sync with the one in lib.rs, otherwise napi
// will fail to compile with the `tokio_rt` feature.

/// If the feature `tokio_rt` has been enabled this will enter the runtime context and
/// then call the provided closure. Otherwise it will just call the provided closure.
#[inline]
pub fn within_runtime_if_available<F: FnOnce() -> T, T>(f: F) -> T {
  let _rt_guard = RT.read().unwrap().as_ref().unwrap().enter();
  f()
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
pub fn execute_tokio_future<
  Data: 'static + Send,
  Fut: 'static + Send + Future<Output = Result<Data>>,
  Resolver: 'static + Send + Sync + FnOnce(sys::napi_env, Data) -> Result<sys::napi_value>,
>(
  env: sys::napi_env,
  fut: Fut,
  resolver: Resolver,
) -> Result<sys::napi_value> {
  let (deferred, promise) = JsDeferred::new(env)?;

  let inner = async move {
    match fut.await {
      Ok(v) => deferred.resolve(|env| {
        resolver(env.raw(), v).map(|v| unsafe { JsUnknown::from_raw_unchecked(env.raw(), v) })
      }),
      Err(e) => deferred.reject(e),
    }
  };

  #[cfg(not(target_arch = "wasm32"))]
  spawn(inner);

  #[cfg(target_arch = "wasm32")]
  block_on(inner);

  Ok(promise.0.value)
}
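The lifecycle that `ensure_runtime` and `drop_runtime` implement — each env increments a reference count on init, and the runtime is torn down only when the last env's cleanup hook fires — can be sketched without any napi or tokio dependency. This is a minimal illustration with hypothetical names (`ensure`, `drop_ref`), not the real API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Stand-in for RT_REFERENCE_COUNT: one increment per initialised Node env.
static REF_COUNT: AtomicUsize = AtomicUsize::new(0);

// Mirrors ensure_runtime: relaxed ordering suffices for the increment.
fn ensure() {
  REF_COUNT.fetch_add(1, Ordering::Relaxed);
}

// Mirrors drop_runtime: fetch_sub returns the previous value, so seeing 1
// means this caller held the last reference and should drop the runtime.
fn drop_ref() -> bool {
  REF_COUNT.fetch_sub(1, Ordering::AcqRel) == 1
}

fn main() {
  ensure(); // first context initialises
  ensure(); // second context initialises
  assert!(!drop_ref()); // first context exits: runtime stays alive
  assert!(drop_ref()); // second context exits: runtime is dropped
  println!("dropped on last reference");
}
```

This is also why the cleanup hook argument must be unique per registration: each context contributes its own increment, so each must get its own matching decrement hook in the queue.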